Comments (10)
No modification. For Caffe style models, we do not perform the division either.
from tsn-pytorch.
In the RGBDiff normalization code, I found that:
self.input_mean = [0.485, 0.456, 0.406] + [0] * 3 * self.new_length
self.input_std = self.input_std + [np.mean(self.input_std) * 2] * 3 * self.new_length
It seems that for each stack, the way you normalize the first frame is different from the other frames. Is there any reason not to normalize in the same way?
Thank you.
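A minimal sketch of what the two lines above produce, to make the channel layout concrete. The initial value of self.input_std is not shown in the question, so the torchvision-style ImageNet std is assumed here for illustration:

```python
import numpy as np

# Sketch (assumed std values): how the extended mean/std lists map onto a
# stacked RGBDiff input of 3 * (new_length + 1) channels. The first 3 channels
# (the kept RGB frame) use the ImageNet statistics; every diff channel gets
# mean 0 and a doubled average std, since diffs span roughly twice the range.
new_length = 5
input_std_rgb = [0.229, 0.224, 0.225]  # assumption: torchvision ImageNet std

input_mean = [0.485, 0.456, 0.406] + [0] * 3 * new_length
input_std = input_std_rgb + [np.mean(input_std_rgb) * 2] * 3 * new_length

assert len(input_mean) == 3 * (new_length + 1)
assert len(input_std) == 3 * (new_length + 1)
```

So the first frame in each stack is normalized like an ordinary RGB image, while all the difference channels share one zero-mean, wider-std normalization.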
1. RGBDiff is calculated by subtracting consecutive frames, so there is no need to subtract means from the frames beforehand. We mostly experimented with Caffe-style models, so the pixel values are used as-is. If you need Torch-style input, you may add the scaling back.
2. These lines are not used, as described in 1. But I think it is a good idea to do a clean-up of the code to remove the confusing parts.
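The reason means need not be subtracted beforehand can be checked in two lines: a constant per-channel offset cancels when two frames are differenced. A minimal sketch with illustrative Caffe-style mean values:

```python
import numpy as np

# Check: subtracting a per-channel mean before differencing changes nothing,
# because the constant offset cancels in the subtraction.
rng = np.random.default_rng(0)
frame_a = rng.uniform(0, 255, size=(4, 4, 3))
frame_b = rng.uniform(0, 255, size=(4, 4, 3))
mean = np.array([104.0, 117.0, 123.0])  # illustrative Caffe-style means

diff_raw = frame_b - frame_a
diff_centered = (frame_b - mean) - (frame_a - mean)

assert np.allclose(diff_raw, diff_centered)
```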
That makes sense. Thank you for clarifying.
So what you're saying is that for the RGBDiff model, we only need to divide the frame values by the std (no need to subtract the mean). Is that correct?
Yes. Subtracted means cancel out when computing the differences, so there is no need to do it beforehand.
One more question about the std. Did you just divide by the original input_std for the RGBDiff model, or did you modify it in some way?
Thanks.
I saw you have an argument keep_rgb for the RGBDiff model, which is set to False. What's the performance if you set keep_rgb to True? And how did you normalize the input in that case, since you have both RGB and RGBDiff as inputs?
Thanks.
It was just a simple trial without much investigation, where we found a marginal improvement. Perhaps that is because we only keep the first frame's RGB. We subtract the means on the kept RGB. You are free to experiment with your own normalization.
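A hypothetical sketch of the keep_rgb input described above, with illustrative shapes and mean values (this is not the repo's exact code): the first frame's RGB is kept and mean-subtracted, and the frame differences follow unchanged.

```python
import numpy as np

# Hypothetical sketch of a keep_rgb stack: mean subtraction is applied only to
# the kept RGB channels; the difference channels are used as-is.
def build_keep_rgb_stack(frames, mean):
    """frames: list of HxWx3 arrays; mean: per-channel mean for the kept RGB."""
    kept = frames[0] - mean                           # mean-subtract kept RGB only
    diffs = [frames[i] - frames[i - 1] for i in range(1, len(frames))]
    return np.concatenate([kept] + diffs, axis=-1)    # 3 * len(frames) channels

# Toy frames with constant values 10, 30, 70 so the diffs are easy to read.
frames = [np.full((2, 2, 3), float(v)) for v in (10, 30, 70)]
stack = build_keep_rgb_stack(frames, mean=np.array([104.0, 117.0, 123.0]))
```

Here the first three channels hold the mean-subtracted RGB frame, and the remaining channels hold the raw differences (20 and 40 in this toy example).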
@yjxiong hi,
For generating the RGB diff images as input to train that branch, you said we can directly subtract two consecutive frames. Does that mean that, for example, to generate all the input images (frames, optical flow, RGB diff) with the dense_flow code, I should add the following to dense_flow_gpu.cpp:
image_diff = capture_image - prev_image;
imencode(".jpg", image_diff, str_img);
Is this the correct way to generate the diff image? Do we need to set any bound (like the optical flow one) to normalize the image_diff from (-255, 255) to (0, 255) in the dense_flow_gpu.cpp code?
After storing all the diff images, we run the RGB diff script to train the RGB diff branch, right?
Many thanks.
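For reference, the bound the question asks about would just be a linear remap. This is illustrative only (the answer below notes the offline step is unnecessary, since RGBDiff is computed on the fly), and the function names here are made up for the sketch:

```python
import numpy as np

# Illustrative only: storing diff images as JPEGs would require remapping
# values from [-255, 255] into [0, 255], with the inverse applied at load
# time. Rounding to uint8 loses at most ~1 level of precision per pixel.
def encode_diff(diff):
    return ((diff.astype(np.float32) + 255.0) / 2.0).round().astype(np.uint8)

def decode_diff(encoded):
    return encoded.astype(np.float32) * 2.0 - 255.0

diff = np.array([-255, -1, 0, 1, 255], dtype=np.int16)
roundtrip = decode_diff(encode_diff(diff))
```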
@gwh0112
No. RGBDiff is generated on the fly during training. You can find the code for this in the repo.
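The on-the-fly idea can be sketched as follows. This assumes a channel layout of new_length + 1 stacked RGB frames per segment and is not the repo's exact code; numpy stands in for torch tensors:

```python
import numpy as np

# Sketch of on-the-fly RGBDiff (assumed layout, not the repo's exact code):
# the loader stacks new_length + 1 RGB frames per segment, and the model
# converts them to new_length difference frames just before the forward pass,
# so no diff images are ever written to disk.
def rgb_diff(stacked, new_length):
    """stacked: (num_segments, (new_length + 1) * 3, H, W) array of raw frames."""
    n, c, h, w = stacked.shape
    view = stacked.reshape(n, new_length + 1, 3, h, w)
    return (view[:, 1:] - view[:, :-1]).reshape(n, new_length * 3, h, w)

# Toy input: 3 segments of 2 stacked RGB frames each (6 channels per segment).
segments = np.arange(3 * 6 * 2 * 2, dtype=np.float32).reshape(3, 6, 2, 2)
diffs = rgb_diff(segments, new_length=1)
```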