This repository contains the source code for the paper:
LLA-FLOW: A Lightweight Local Aggregation on Cost Volume for Optical Flow Estimation
ICIP 2023
Jiawei Xu, Zongqing Lu and Qingmin Liao
Install the environment; the code has been tested under PyTorch 1.10.
conda env create -f requirements.yaml
Pretrained models can be downloaded from the Releases Page.
Place a sequence of frames in ./demo_imgs, then run the demo script; the results can be viewed in ./demo_imgs/result:
bash demo.sh
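The demo writes its results as images. If you instead want to work with raw flow fields in the standard Middlebury .flo format (a common output format for RAFT-style codebases; whether this repository emits .flo files is an assumption), a minimal reader looks like this:

```python
import numpy as np

def read_flo(path):
    """Read a Middlebury .flo optical-flow file into an (H, W, 2) float32 array.

    Format: float32 magic number 202021.25, then int32 width, int32 height,
    then H*W interleaved (u, v) float32 pairs in row-major order.
    """
    with open(path, "rb") as f:
        magic = np.fromfile(f, np.float32, count=1)[0]
        assert magic == 202021.25, "invalid .flo file"
        w = int(np.fromfile(f, np.int32, count=1)[0])
        h = int(np.fromfile(f, np.int32, count=1)[0])
        data = np.fromfile(f, np.float32, count=2 * w * h)
    return data.reshape(h, w, 2)
```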
To train and evaluate the optical flow models, you will need to download the required datasets and place the datasets folder in the root directory of the project:
├── datasets
│   ├── FlyingChairs_release
│   │   ├── data
│   ├── FlyingThings3D
│   │   ├── frames_cleanpass
│   │   ├── frames_finalpass
│   │   ├── optical_flow
│   ├── Sintel
│   │   ├── test
│   │   ├── training
│   │   ├── bundler
│   ├── KITTI
│   │   ├── testing
│   │   ├── training
│   ├── HD1K
│   │   ├── hd1k_input
│   │   ├── hd1k_flow_gt
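A quick way to catch a misplaced dataset before a long training run is to verify the layout above. The helper below is a hypothetical convenience script, not part of the repository:

```python
import os

# Expected sub-directories per dataset, mirroring the tree in the README.
# "bundler" under Sintel is omitted here since training does not read it.
EXPECTED = {
    "FlyingChairs_release": ["data"],
    "FlyingThings3D": ["frames_cleanpass", "frames_finalpass", "optical_flow"],
    "Sintel": ["test", "training"],
    "KITTI": ["testing", "training"],
    "HD1K": ["hd1k_input", "hd1k_flow_gt"],
}

def check_datasets(root="datasets"):
    """Return a list of expected dataset paths that are missing under root."""
    missing = []
    for name, subdirs in EXPECTED.items():
        for sub in subdirs:
            path = os.path.join(root, name, sub)
            if not os.path.isdir(path):
                missing.append(path)
    return missing

if __name__ == "__main__":
    for path in check_datasets():
        print("missing:", path)
```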
Training requires two GPUs with 12 GB of VRAM each; including the GMA module may require two 16 GB GPUs.
Training based on RAFT:
bash train_raft.sh
Training based on GMA:
bash train_gma.sh
Select the appropriate command from evaluate.sh for model evaluation. For example:
python evaluate.py --model checkpoints/lla-raft-sintel.pth --mixed_precision --dataset sintel
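Evaluation for optical flow is typically reported as the average end-point error (EPE): the mean Euclidean distance between predicted and ground-truth flow vectors, optionally restricted to valid pixels (e.g. on KITTI). A minimal reference implementation of the metric, independent of this repository's evaluate.py:

```python
import numpy as np

def endpoint_error(pred, gt, valid=None):
    """Average end-point error between two (H, W, 2) flow fields.

    pred, gt : arrays of (u, v) flow vectors per pixel.
    valid    : optional (H, W) mask; only pixels where it is nonzero count.
    """
    epe = np.sqrt(((pred - gt) ** 2).sum(-1))  # per-pixel Euclidean distance
    if valid is not None:
        epe = epe[valid.astype(bool)]
    return float(epe.mean())
```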
The overall code framework is adapted from RAFT and GMA. We thank the authors for their contributions.