
tsgvs-mm2023's Issues

MAD dataset preprocessed-features request

Hello author,
Could you provide a Google Drive or Baidu Netdisk link for downloading the MAD dataset?
I am unable to download the source MAD dataset due to network restrictions.
Could you please send me a copy of the preprocessed MAD feature dataset? Thank you very much!

I would be grateful for a prompt reply and a download link.

Reproduction doesn't work

I couldn't resolve the following NCCL error.

<tools/dist_train_fixed.sh>

#!/usr/bin/env bash

CONFIG=$1
GPUS=$2
PORT=${PORT:-29500}

CUDA_VISIBLE_DEVICES="0,1"
NUM_CORES=8

export CUDA_VISIBLE_DEVICES
export NCCL_DEBUG=WARN
export OMP_NUM_THREADS=$NUM_CORES
export MKL_NUM_THREADS=$NUM_CORES

PYTHONPATH="$(dirname "$0")/..":$PYTHONPATH \
torchrun --nproc_per_node=$GPUS --master_port=$PORT \
    $(dirname "$0")/train.py $CONFIG --launcher pytorch ${@:3}

$ ./tools/dist_train_fixed.sh configs/tpn/benchmark/tacos_mtl_16_tpn_dec1_rnn_dot_s8_l64_b8*64_kd02.py 8 --validate --test-best
2024-03-16 20:54:52,363 - tsgv - INFO - Environment info:


sys.platform: linux
Python: 3.8.18 (default, Sep 11 2023, 13:20:55) [GCC 11.2.0]
CUDA available: True
GPU 0,1: NVIDIA RTX A6000
CUDA_HOME: /usr
NVCC: Cuda compilation tools, release 10.1, V10.1.24
GCC: gcc (conda-forge gcc 13.2.0-5) 13.2.0
PyTorch: 1.12.0+cu116
PyTorch compiling details: PyTorch built with:

  • GCC 9.3
  • C++ Version: 201402
  • Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
  • Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)
  • OpenMP 201511 (a.k.a. OpenMP 4.5)
  • LAPACK is enabled (usually provided by MKL)
  • NNPACK is enabled
  • CPU capability usage: AVX2
  • CUDA Runtime 11.6
  • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
  • CuDNN 8.3.2 (built against CUDA 11.5)
  • Magma 2.6.1
  • Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.6, CUDNN_VERSION=8.3.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.12.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,

TorchVision: 0.13.0+cu116
OpenCV: 4.9.0
MMCV: 1.6.0
MMCV Compiler: GCC 9.3
MMCV CUDA Compiler: 11.6
TSGV: 0.0.0+13845ee


2024-03-16 20:54:52,363 - tsgv - INFO - Distributed training: True
2024-03-16 20:54:52,636 - tsgv - INFO - Config: checkpoint_config = dict(interval=1)
log_config = dict(
interval=100,
hooks=[dict(type='TextLoggerHook'),
dict(type='TensorboardLoggerHook')])
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1)]
opencv_num_threads = 0
mp_start_method = 'fork'
short_memory_sample_length = 8
long_memory_sample_length = 64
future_memory_length = 64
short_memory_stride = 1
long_memory_stride = 1
future_memory_stride = 1
model = dict(
type='BaseStreamingTSGV',
train_aux_info=['video_name', 'anno_framestamp', 'memory_framestamp'],
sentence_encoder=dict(
type='RNNSentenceEncoder',
sent_dim=768,
hidden_dim=1024,
num_layers=2,
rnn_type='GRU',
dropout=0.5,
bidirectional=True,
pool_strategy=None),
video_encoder=dict(
type='BaseStreamingVideoEncoder',
video_dim=4096,
hidden_dim=1024,
use_long=True,
use_future=True,
norm_after=False),
interactor=dict(
type='TPNInteractor',
hidden_dim=1024,
feedforward_dim=1024,
short_memory_length=8,
long_memory_length=64,
future_memory_length=64,
memory_compressor='MultimodalTokenLearner',
num_compressor_tokens=[16],
num_decoder_layers=1,
sent_norm_before=False,
num_heads=8,
dropout=0.2,
future_usage='TemporalPincer'),
predictor=dict(
type='DotProductPredictorTPN',
hidden_dim=1024,
tau=16.0,
gamma=3.0,
num_layers=1,
tpn_type='KD',
tpn_distillation_weight=0.2))
dataset_regenerate = True
dataset_type = 'OnlineTSGVDataset'
data_root = 'data/tacos/feature/'
data_root_val = 'data/tacos/feature/'
ann_file_train = 'data/tacos/train.json'
ann_file_val = 'data/tacos/val.json'
ann_file_test = 'data/tacos/test.json'
train_pipeline = [
dict(
type='Collect',
keys=[
'short_memories', 'short_memory_masks', 'sentence_features',
'sentence_length', 'long_memories', 'long_memory_masks',
'future_memories', 'future_memory_masks', 'start_label',
'end_label', 'semantic_label', 'video_name', 'anno_framestamp',
'memory_framestamp'
],
meta_keys=(),
meta_name='tsgv_metas'),
dict(
type='ToDataContainer',
fields=[
dict(key='video_name', cpu_only=True),
dict(key='anno_framestamp', cpu_only=True),
dict(key='memory_framestamp', cpu_only=True)
]),
dict(
type='ToTensor',
keys=[
'short_memories', 'short_memory_masks', 'sentence_features',
'sentence_length', 'long_memories', 'long_memory_masks',
'future_memories', 'future_memory_masks', 'start_label',
'end_label', 'semantic_label'
])
]
val_pipeline = [
dict(
type='Collect',
keys=[
'short_memories', 'short_memory_masks', 'sentence_features',
'sentence_length', 'long_memories', 'long_memory_masks'
],
meta_keys=(),
meta_name='tsgv_metas'),
dict(
type='ToTensor',
keys=[
'short_memories', 'short_memory_masks', 'sentence_features',
'sentence_length', 'long_memories', 'long_memory_masks'
])
]
test_pipeline = [
dict(
type='Collect',
keys=[
'short_memories', 'short_memory_masks', 'sentence_features',
'sentence_length', 'long_memories', 'long_memory_masks'
],
meta_keys=(),
meta_name='tsgv_metas'),
dict(
type='ToTensor',
keys=[
'short_memories', 'short_memory_masks', 'sentence_features',
'sentence_length', 'long_memories', 'long_memory_masks'
])
]
data = dict(
videos_per_gpu=64,
workers_per_gpu=6,
val_dataloader=dict(videos_per_gpu=128, pin_memory=True),
test_dataloader=dict(videos_per_gpu=128, pin_memory=True),
train=dict(
type='OnlineTSGVDataset',
ann_file=['data/tacos/train.json', 'data/tacos/val.json'],
pipeline=[
dict(
type='Collect',
keys=[
'short_memories', 'short_memory_masks',
'sentence_features', 'sentence_length', 'long_memories',
'long_memory_masks', 'future_memories',
'future_memory_masks', 'start_label', 'end_label',
'semantic_label', 'video_name', 'anno_framestamp',
'memory_framestamp'
],
meta_keys=(),
meta_name='tsgv_metas'),
dict(
type='ToDataContainer',
fields=[
dict(key='video_name', cpu_only=True),
dict(key='anno_framestamp', cpu_only=True),
dict(key='memory_framestamp', cpu_only=True)
]),
dict(
type='ToTensor',
keys=[
'short_memories', 'short_memory_masks',
'sentence_features', 'sentence_length', 'long_memories',
'long_memory_masks', 'future_memories',
'future_memory_masks', 'start_label', 'end_label',
'semantic_label'
])
],
short_memory_sample_length=8,
data_prefix='data/tacos/feature/',
split='train&val',
video_feat_filename='tall_c3d_features.hdf5',
long_memory_sample_length=64,
short_memory_stride=1,
long_memory_stride=1,
future_memory_sample_length=64,
future_memory_stride=1,
load_future_memory=True,
gaussian_label=True),
val=dict(
type='OnlineTSGVDataset',
ann_file='data/tacos/test.json',
pipeline=[
dict(
type='Collect',
keys=[
'short_memories', 'short_memory_masks',
'sentence_features', 'sentence_length', 'long_memories',
'long_memory_masks'
],
meta_keys=(),
meta_name='tsgv_metas'),
dict(
type='ToTensor',
keys=[
'short_memories', 'short_memory_masks',
'sentence_features', 'sentence_length', 'long_memories',
'long_memory_masks'
])
],
short_memory_sample_length=8,
data_prefix='data/tacos/feature/',
split='test',
video_feat_filename='tall_c3d_features.hdf5',
long_memory_sample_length=64,
short_memory_stride=1,
long_memory_stride=1,
portion=0.5),
test=dict(
type='OnlineTSGVDataset',
ann_file='data/tacos/test.json',
pipeline=[
dict(
type='Collect',
keys=[
'short_memories', 'short_memory_masks',
'sentence_features', 'sentence_length', 'long_memories',
'long_memory_masks'
],
meta_keys=(),
meta_name='tsgv_metas'),
dict(
type='ToTensor',
keys=[
'short_memories', 'short_memory_masks',
'sentence_features', 'sentence_length', 'long_memories',
'long_memory_masks'
])
],
short_memory_sample_length=8,
data_prefix='data/tacos/feature/',
split='test',
video_feat_filename='tall_c3d_features.hdf5',
long_memory_sample_length=64,
short_memory_stride=1,
long_memory_stride=1))
evaluation = dict(
interval=1,
metrics=['R@N,IoU=M', 'mcAP'],
metric_options=dict({
'R@N,IoU=M':
dict(recall_at=[1, 5], iou_at=[0.3, 0.5, 0.7], nms_thresh=0.5)
}),
save_best='R@1,IoU=0.5',
rule='greater')
eval_config = dict(
metrics=['R@N,IoU=M', 'mcAP'],
metric_options=dict(
{'R@N,IoU=M': dict(recall_at=[1, 5], iou_at=[0.3, 0.5, 0.7])}))
optimizer = dict(type='AdamW', lr=3e-05, weight_decay=0.0005)
optimizer_config = dict(grad_clip=dict(max_norm=20, norm_type=2))
lr_config = dict(
policy='FlatCosineAnnealing',
by_epoch=False,
warmup='linear',
warmup_iters=4,
warmup_ratio=0.1,
warmup_by_epoch=True,
start_percent=0.4,
min_lr=0.0)
total_epochs = 10
work_dir = './work_dirs/tacos_mtl_16_tpn_dec1_rnn_dot_s8_l64_b8*64_kd02'
gpu_ids = range(0, 8)
module_hooks = []

NCCL version 2.10.3+cuda11.6

oem-WS-C621E-SAGE-Series:455454:455716 [1] init.cc:521 NCCL WARN Duplicate GPU detected : rank 7 and rank 1 both on CUDA device b3000

oem-WS-C621E-SAGE-Series:455447:455713 [0] init.cc:521 NCCL WARN Duplicate GPU detected : rank 0 and rank 2 both on CUDA device 65000

oem-WS-C621E-SAGE-Series:455448:455714 [1] init.cc:521 NCCL WARN Duplicate GPU detected : rank 1 and rank 3 both on CUDA device b3000

oem-WS-C621E-SAGE-Series:455449:455717 [0] init.cc:521 NCCL WARN Duplicate GPU detected : rank 2 and rank 0 both on CUDA device 65000

oem-WS-C621E-SAGE-Series:455450:455720 [1] init.cc:521 NCCL WARN Duplicate GPU detected : rank 3 and rank 1 both on CUDA device b3000

oem-WS-C621E-SAGE-Series:455451:455715 [0] init.cc:521 NCCL WARN Duplicate GPU detected : rank 4 and rank 0 both on CUDA device 65000

oem-WS-C621E-SAGE-Series:455452:455718 [1] init.cc:521 NCCL WARN Duplicate GPU detected : rank 5 and rank 1 both on CUDA device b3000

oem-WS-C621E-SAGE-Series:455453:455719 [0] init.cc:521 NCCL WARN Duplicate GPU detected : rank 6 and rank 0 both on CUDA device 65000
(Each of the eight worker processes prints the same traceback, interleaved; deduplicated:)

Traceback (most recent call last):
  File "./tools/train.py", line 207, in <module>
    main()
  File "./tools/train.py", line 158, in main
    seed = init_random_seed(args.seed, distributed=distributed)
  File "/data/gyeongjin-git/TSGVs-MM2023/tsgv/apis/train.py", line 53, in init_random_seed
    dist.broadcast(random_num, src=0)
  File "/home/gyeongjinkim/.conda/envs/tsgv/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1193, in broadcast
    work = default_pg.broadcast([tensor], opts)
RuntimeError: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1191, invalid usage, NCCL version 2.10.3
ncclInvalidUsage: This usually reflects invalid usage of NCCL library (such as too many async ops, too many collectives at once, mixing streams in a group, etc).

ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 455447) of binary: /home/gyeongjinkim/.conda/envs/tsgv/bin/python
Traceback (most recent call last):
  File "/home/gyeongjinkim/.conda/envs/tsgv/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/home/gyeongjinkim/.conda/envs/tsgv/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
    return f(*args, **kwargs)
  File "/home/gyeongjinkim/.conda/envs/tsgv/lib/python3.8/site-packages/torch/distributed/run.py", line 761, in main
    run(args)
  File "/home/gyeongjinkim/.conda/envs/tsgv/lib/python3.8/site-packages/torch/distributed/run.py", line 752, in run
    elastic_launch(
  File "/home/gyeongjinkim/.conda/envs/tsgv/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/gyeongjinkim/.conda/envs/tsgv/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

============================================================
./tools/train.py FAILED


Failures:
[1]:
time : 2024-03-16_20:55:00
host : oem-WS-C621E-SAGE-Series
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 455448)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2]:
time : 2024-03-16_20:55:00
host : oem-WS-C621E-SAGE-Series
rank : 2 (local_rank: 2)
exitcode : 1 (pid: 455449)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[3]:
time : 2024-03-16_20:55:00
host : oem-WS-C621E-SAGE-Series
rank : 3 (local_rank: 3)
exitcode : 1 (pid: 455450)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[4]:
time : 2024-03-16_20:55:00
host : oem-WS-C621E-SAGE-Series
rank : 4 (local_rank: 4)
exitcode : 1 (pid: 455451)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[5]:
time : 2024-03-16_20:55:00
host : oem-WS-C621E-SAGE-Series
rank : 5 (local_rank: 5)
exitcode : 1 (pid: 455452)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[6]:
time : 2024-03-16_20:55:00
host : oem-WS-C621E-SAGE-Series
rank : 6 (local_rank: 6)
exitcode : 1 (pid: 455453)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[7]:
time : 2024-03-16_20:55:00
host : oem-WS-C621E-SAGE-Series
rank : 7 (local_rank: 7)
exitcode : 1 (pid: 455454)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html


Root Cause (first observed failure):
[0]:
time : 2024-03-16_20:55:00
host : oem-WS-C621E-SAGE-Series
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 455447)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

============================================================
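
Judging from the "Duplicate GPU detected" warnings, the likely mismatch (an assumption from the log, not a confirmed fix) is that the script hard-codes CUDA_VISIBLE_DEVICES="0,1", exposing only two devices, while the command passes GPUS=8, so torchrun spawns eight ranks that all land on the same two GPUs. A minimal sketch for checking the visible-device count before launching (standard os/torch calls only; the file name is illustrative):

<check_gpus.py>

# Count the devices PyTorch can actually see before picking --nproc_per_node.
import os
import torch

visible = os.environ.get("CUDA_VISIBLE_DEVICES", "<unset: all GPUs visible>")
count = torch.cuda.device_count()
print(f"CUDA_VISIBLE_DEVICES = {visible}")
print(f"torch.cuda.device_count() = {count}")
# The script's $GPUS argument (torchrun --nproc_per_node) should not exceed
# `count`: with two visible GPUs, launch with 2 instead of 8, or drop the
# hard-coded CUDA_VISIBLE_DEVICES line if the machine really has eight GPUs.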

Missing baseline encoder, interactor, and predictor code

The released source code for the sentence encoder, video encoder, interactor, and predictor only defines the TPN variants. The baselines such as 2D-TAN, SeqPAN, SMIN, and VSLNet would require classes like TANVideoEncoder, TANInteractor, TANPredictor, SeqPANInteractor, SeqPANPredictor, SMINInteractor, SMINPredictor, and VSLNetInteractor, but none of these are defined. Could the author please provide the code files for these baselines?
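
For reference while those files are missing, below is a purely hypothetical sketch of how such a module would typically be registered in an MMCV-based codebase like this one. The registry name, file path, and class internals are assumptions for illustration, not the repo's actual API:

<tsgv/models/interactors/tan_interactor.py (hypothetical)>

import torch
import torch.nn as nn
from mmcv.utils import Registry

# Assumed registry; reuse the repo's own interactor registry if it has one.
INTERACTORS = Registry('interactor')


@INTERACTORS.register_module()
class TANInteractor(nn.Module):
    """Placeholder cross-modal interactor in the 2D-TAN style."""

    def __init__(self, hidden_dim=1024):
        super().__init__()
        self.video_proj = nn.Linear(hidden_dim, hidden_dim)
        self.sent_proj = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, video_feat, sent_feat):
        # Fuse by Hadamard product of projected features, as in 2D-TAN.
        v = self.video_proj(video_feat)             # (B, T, D)
        s = self.sent_proj(sent_feat).unsqueeze(1)  # (B, 1, D)
        return torch.relu(v * s)

Once registered, a config could reference it with dict(type='TANInteractor', hidden_dim=1024), mirroring how interactor=dict(type='TPNInteractor', ...) is built in the config above.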
