chenhongruixuan / mambacd

305 stars · 3 watchers · 11 forks · 64.3 MB

[IEEE TGRS 2024] ChangeMamba: Remote Sensing Change Detection Based on Spatio-Temporal State Space Model

License: Apache License 2.0

Python 83.48% Shell 1.28% C++ 4.71% Cuda 10.21% C 0.32%
change-detection mamba remote-sensing spatio-temporal-modeling state-space-model building-damage-assessment semantic-change-detection cd-mamba change-mamba changemamba

mambacd's People

Contributors

chengxihan · chenhongruixuan · jtrneo


mambacd's Issues

ERROR: Command errored out with exit status 1:

Hello, when I run cd kernels/selective_scan && pip install . I get the following error; I hope you can help resolve it:
Requirement already satisfied: torch in /opt/conda/lib/python3.8/site-packages (from selective-scan==0.0.2) (2.3.1)
Requirement already satisfied: packaging in /opt/conda/lib/python3.8/site-packages (from selective-scan==0.0.2) (21.3)
Requirement already satisfied: ninja in /opt/conda/lib/python3.8/site-packages (from selective-scan==0.0.2) (1.11.1.1)
Requirement already satisfied: einops in /opt/conda/lib/python3.8/site-packages (from selective-scan==0.0.2) (0.3.0)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /opt/conda/lib/python3.8/site-packages (from packaging->selective-scan==0.0.2) (3.0.9)
Requirement already satisfied: nvidia-cusparse-cu12==12.1.0.106 in /opt/conda/lib/python3.8/site-packages (from torch->selective-scan==0.0.2) (12.1.0.106)
Requirement already satisfied: nvidia-nvtx-cu12==12.1.105 in /opt/conda/lib/python3.8/site-packages (from torch->selective-scan==0.0.2) (12.1.105)
Requirement already satisfied: jinja2 in /opt/conda/lib/python3.8/site-packages (from torch->selective-scan==0.0.2) (3.1.2)
Requirement already satisfied: nvidia-cuda-cupti-cu12==12.1.105 in /opt/conda/lib/python3.8/site-packages (from torch->selective-scan==0.0.2) (12.1.105)
Requirement already satisfied: nvidia-cufft-cu12==11.0.2.54 in /opt/conda/lib/python3.8/site-packages (from torch->selective-scan==0.0.2) (11.0.2.54)
Requirement already satisfied: triton==2.3.1 in /opt/conda/lib/python3.8/site-packages (from torch->selective-scan==0.0.2) (2.3.1)
Requirement already satisfied: nvidia-cublas-cu12==12.1.3.1 in /opt/conda/lib/python3.8/site-packages (from torch->selective-scan==0.0.2) (12.1.3.1)
Requirement already satisfied: nvidia-cudnn-cu12==8.9.2.26 in /opt/conda/lib/python3.8/site-packages (from torch->selective-scan==0.0.2) (8.9.2.26)
Requirement already satisfied: networkx in /opt/conda/lib/python3.8/site-packages (from torch->selective-scan==0.0.2) (3.1)
Requirement already satisfied: nvidia-cusolver-cu12==11.4.5.107 in /opt/conda/lib/python3.8/site-packages (from torch->selective-scan==0.0.2) (11.4.5.107)
Requirement already satisfied: typing-extensions>=4.8.0 in /opt/conda/lib/python3.8/site-packages (from torch->selective-scan==0.0.2) (4.12.2)
Requirement already satisfied: filelock in /opt/conda/lib/python3.8/site-packages (from torch->selective-scan==0.0.2) (3.7.1)
Requirement already satisfied: nvidia-nccl-cu12==2.20.5 in /opt/conda/lib/python3.8/site-packages (from torch->selective-scan==0.0.2) (2.20.5)
Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.1.105 in /opt/conda/lib/python3.8/site-packages (from torch->selective-scan==0.0.2) (12.1.105)
Requirement already satisfied: fsspec in /opt/conda/lib/python3.8/site-packages (from torch->selective-scan==0.0.2) (2022.7.1)
Requirement already satisfied: sympy in /opt/conda/lib/python3.8/site-packages (from torch->selective-scan==0.0.2) (1.12)
Requirement already satisfied: nvidia-cuda-runtime-cu12==12.1.105 in /opt/conda/lib/python3.8/site-packages (from torch->selective-scan==0.0.2) (12.1.105)
Requirement already satisfied: nvidia-curand-cu12==10.3.2.106 in /opt/conda/lib/python3.8/site-packages (from torch->selective-scan==0.0.2) (10.3.2.106)
Requirement already satisfied: nvidia-nvjitlink-cu12 in /opt/conda/lib/python3.8/site-packages (from nvidia-cusolver-cu12==11.4.5.107->torch->selective-scan==0.0.2) (12.5.82)
Requirement already satisfied: MarkupSafe>=2.0 in /opt/conda/lib/python3.8/site-packages (from jinja2->torch->selective-scan==0.0.2) (2.1.1)
Requirement already satisfied: mpmath>=0.19 in /opt/conda/lib/python3.8/site-packages (from sympy->torch->selective-scan==0.0.2) (1.3.0)
Building wheels for collected packages: selective-scan
Building wheel for selective-scan (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /opt/conda/bin/python3.8 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-f9_8c9in/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-f9_8c9in/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-6mo72nd0
cwd: /tmp/pip-req-build-f9_8c9in/
Complete output (51 lines):

torch.__version__ = 2.3.1+cu121

CUDA_HOME = /usr/local/cuda

running bdist_wheel
running build
running build_ext
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/tmp/pip-req-build-f9_8c9in/setup.py", line 140, in <module>
    setup(
  File "/opt/conda/lib/python3.8/site-packages/setuptools/__init__.py", line 153, in setup
    return distutils.core.setup(**attrs)
  File "/opt/conda/lib/python3.8/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/opt/conda/lib/python3.8/distutils/dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "/opt/conda/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/opt/conda/lib/python3.8/site-packages/wheel/bdist_wheel.py", line 299, in run
    self.run_command('build')
  File "/opt/conda/lib/python3.8/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/opt/conda/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/opt/conda/lib/python3.8/distutils/command/build.py", line 135, in run
    self.run_command(cmd_name)
  File "/opt/conda/lib/python3.8/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/opt/conda/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/opt/conda/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 79, in run
    _build_ext.run(self)
  File "/opt/conda/lib/python3.8/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run
    _build_ext.build_ext.run(self)
  File "/opt/conda/lib/python3.8/distutils/command/build_ext.py", line 340, in run
    self.build_extensions()
  File "/opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 522, in build_extensions
    _check_cuda_version(compiler_name, compiler_version)
  File "/opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 417, in _check_cuda_version
    raise RuntimeError(CUDA_MISMATCH_MESSAGE.format(cuda_str_version, torch.version.cuda))
RuntimeError:
The detected CUDA version (11.7) mismatches the version that was used to compile
PyTorch (12.1). Please make sure to use the same CUDA versions.


ERROR: Failed building wheel for selective-scan
Running setup.py clean for selective-scan
Failed to build selective-scan
Installing collected packages: selective-scan
Running setup.py install for selective-scan ... error
ERROR: Command errored out with exit status 1:
command: /opt/conda/bin/python3.8 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-f9_8c9in/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-f9_8c9in/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-ci0937mt/install-record.txt --single-version-externally-managed --compile --install-headers /opt/conda/include/python3.8/selective-scan
cwd: /tmp/pip-req-build-f9_8c9in/
Complete output (55 lines):

torch.__version__  = 2.3.1+cu121




CUDA_HOME = /usr/local/cuda


running install
/opt/conda/lib/python3.8/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
  warnings.warn(
running build
running build_ext
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/tmp/pip-req-build-f9_8c9in/setup.py", line 140, in <module>
    setup(
  File "/opt/conda/lib/python3.8/site-packages/setuptools/__init__.py", line 153, in setup
    return distutils.core.setup(**attrs)
  File "/opt/conda/lib/python3.8/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/opt/conda/lib/python3.8/distutils/dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "/opt/conda/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/opt/conda/lib/python3.8/site-packages/setuptools/command/install.py", line 68, in run
    return orig.install.run(self)
  File "/opt/conda/lib/python3.8/distutils/command/install.py", line 545, in run
    self.run_command('build')
  File "/opt/conda/lib/python3.8/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/opt/conda/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/opt/conda/lib/python3.8/distutils/command/build.py", line 135, in run
    self.run_command(cmd_name)
  File "/opt/conda/lib/python3.8/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/opt/conda/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/opt/conda/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 79, in run
    _build_ext.run(self)
  File "/opt/conda/lib/python3.8/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run
    _build_ext.build_ext.run(self)
  File "/opt/conda/lib/python3.8/distutils/command/build_ext.py", line 340, in run
    self.build_extensions()
  File "/opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 522, in build_extensions
    _check_cuda_version(compiler_name, compiler_version)
  File "/opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 417, in _check_cuda_version
    raise RuntimeError(CUDA_MISMATCH_MESSAGE.format(cuda_str_version, torch.version.cuda))
RuntimeError:
The detected CUDA version (11.7) mismatches the version that was used to compile
PyTorch (12.1). Please make sure to use the same CUDA versions.

----------------------------------------

ERROR: Command errored out with exit status 1: /opt/conda/bin/python3.8 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-f9_8c9in/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-f9_8c9in/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-ci0937mt/install-record.txt --single-version-externally-managed --compile --install-headers /opt/conda/include/python3.8/selective-scan Check the logs for full command output.
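Both tracebacks fail for the same reason: this PyTorch build was compiled against CUDA 12.1, while the toolkit found at CUDA_HOME is 11.7, so torch.utils.cpp_extension refuses to build the extension. A minimal sketch of the major-version comparison that triggers the RuntimeError (plain Python, so it runs without torch; the real check in _check_cuda_version can be stricter about minor versions). The usual fixes are to point CUDA_HOME at a CUDA 12.1 toolkit, or to install a PyTorch build matching the local 11.7 toolkit.

```python
def cuda_versions_compatible(toolkit_version: str, torch_cuda_version: str) -> bool:
    """Sketch of the check torch performs before compiling a CUDA extension:
    the local CUDA toolkit's major version must match the CUDA version that
    PyTorch itself was compiled against."""
    toolkit_major = int(toolkit_version.split(".")[0])
    torch_major = int(torch_cuda_version.split(".")[0])
    return toolkit_major == torch_major

# Versions from the log above: nvcc reports 11.7, torch is 2.3.1+cu121.
print(cuda_versions_compatible("11.7", "12.1"))  # False -> the RuntimeError above
```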

Comparison between SiameseKPConv and MambaCD

Thank you very much for your guidance. I successfully reproduced MambaCD and reached SOTA on 2D change detection. Since I am new to this field, I noticed that SiameseKPConv has been proposed for 3D change detection; have you compared these two models?

The third layer of the network in the BCD task

Hello, I noticed that the layer depths of the small model introduced in the paper are (2, 2, 27, 2), but the small configuration file specifies (2, 2, 15, 2). Do I need to modify the configuration file?

validation dataset cropsize

Thank you very much for your efforts. I observed that while running train_MambaSCD.py, the dataset is not cropped during the validation stage. Could you explain the data processing procedure used during validation?
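For context, one common way to evaluate at a fixed size is to center-crop each validation tile (or slide a fixed window over it); whether MambaSCD does this is exactly what the question asks, so the following is only a generic sketch with a hypothetical crop_size parameter, not the repository's actual validation code:

```python
def center_crop_box(height: int, width: int, crop_size: int):
    """Return (top, left, bottom, right) of a centered crop_size x crop_size
    window; assumes the tile is at least crop_size in each dimension."""
    top = (height - crop_size) // 2
    left = (width - crop_size) // 2
    return top, left, top + crop_size, left + crop_size

# e.g. evaluating a 512x512 validation tile at 256x256
print(center_crop_box(512, 512, 256))  # (128, 128, 384, 384)
```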

Question about the SYSU test set

Thank you very much for the MambaCD work. While reproducing the code, I noticed that no validation set is used; instead, the test set is evaluated every fixed number of iterations. For example, the sh script specifies:
--test_dataset_path '<dataset_path>/SYSU/test'
which is the test-set path, not a validation-set path. In the training script:
if (itera + 1) % 10 == 0:
    print(f'iter is {itera + 1}, overall loss is {final_loss}')
if (itera + 1) % 500 == 0:
    self.deep_model.eval()
    rec, pre, oa, f1_score, iou, kc = self.validation()
    if kc > best_kc:
        torch.save(self.deep_model.state_dict(),
                   os.path.join(self.model_save_path, f'{itera + 1}_model.pth'))
        best_kc = kc
        best_round = [rec, pre, oa, f1_score, iou, kc]
    self.deep_model.train()

print('The accuracy of the best round is ', best_round)

This looks like selecting the best performance on the test set. Is the performance reported in the paper obtained this way?
Thank you very much.

Could not build wheels for selective_scan, which is required to install pyproject.toml-based projects

Hello, I encountered the following error while trying to install; could you help resolve it?
Processing /home/hhy/下载/VMamba-main/kernels/selective_scan
Preparing metadata (setup.py) ... done
Requirement already satisfied: torch in /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages (from selective_scan==0.0.2) (2.1.1+cu118)
Requirement already satisfied: packaging in /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages (from selective_scan==0.0.2) (23.2)
Requirement already satisfied: ninja in /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages (from selective_scan==0.0.2) (1.11.1.1)
Requirement already satisfied: einops in /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages (from selective_scan==0.0.2) (0.7.0)
Requirement already satisfied: filelock in /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages (from torch->selective_scan==0.0.2) (3.9.0)
Requirement already satisfied: typing-extensions in /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages (from torch->selective_scan==0.0.2) (4.8.0)
Requirement already satisfied: sympy in /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages (from torch->selective_scan==0.0.2) (1.12)
Requirement already satisfied: networkx in /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages (from torch->selective_scan==0.0.2) (3.2.1)
Requirement already satisfied: jinja2 in /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages (from torch->selective_scan==0.0.2) (3.1.2)
Requirement already satisfied: fsspec in /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages (from torch->selective_scan==0.0.2) (2024.3.1)
Requirement already satisfied: triton==2.1.0 in /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages (from torch->selective_scan==0.0.2) (2.1.0)
Requirement already satisfied: MarkupSafe>=2.0 in /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages (from jinja2->torch->selective_scan==0.0.2) (2.1.3)
Requirement already satisfied: mpmath>=0.19 in /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages (from sympy->torch->selective_scan==0.0.2) (1.3.0)
Building wheels for collected packages: selective_scan
Building wheel for selective_scan (setup.py) ... error
error: subprocess-exited-with-error

× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [118 lines of output]

  torch.__version__  = 2.1.1+cu118
  
  
  
  
  CUDA_HOME = /home/hhy/anaconda3/envs/mamba
  
  
  running bdist_wheel
  running build
  running build_ext
  /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/utils/cpp_extension.py:424: UserWarning: There are no g++ version bounds defined for CUDA version 11.8
    warnings.warn(f'There are no {compiler_name} version bounds defined for CUDA version {cuda_str_version}')
  building 'selective_scan_cuda_core' extension
  creating /home/hhy/下载/VMamba-main/kernels/selective_scan/build
  creating /home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310
  creating /home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc
  creating /home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc/selective_scan
  creating /home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc/selective_scan/cus
  Emitting ninja build file /home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/build.ninja...
  Compiling objects...
  Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
  [1/3] c++ -MMD -MF '/home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc/selective_scan/cus/selective_scan.o'.d -pthread -B /home/hhy/anaconda3/envs/mamba/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/hhy/anaconda3/envs/mamba/include -fPIC -O2 -isystem /home/hhy/anaconda3/envs/mamba/include -fPIC '-I/home/hhy/下载/VMamba-main/kernels/selective_scan/csrc/selective_scan' -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/TH -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/THC -I/home/hhy/anaconda3/envs/mamba/include -I/home/hhy/anaconda3/envs/mamba/include/python3.10 -c -c '/home/hhy/下载/VMamba-main/kernels/selective_scan/csrc/selective_scan/cus/selective_scan.cpp' -o '/home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc/selective_scan/cus/selective_scan.o' -O3 -std=c++17 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=selective_scan_cuda_core -D_GLIBCXX_USE_CXX11_ABI=0
  FAILED: /home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc/selective_scan/cus/selective_scan.o
  c++ -MMD -MF '/home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc/selective_scan/cus/selective_scan.o'.d -pthread -B /home/hhy/anaconda3/envs/mamba/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/hhy/anaconda3/envs/mamba/include -fPIC -O2 -isystem /home/hhy/anaconda3/envs/mamba/include -fPIC '-I/home/hhy/下载/VMamba-main/kernels/selective_scan/csrc/selective_scan' -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/TH -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/THC -I/home/hhy/anaconda3/envs/mamba/include -I/home/hhy/anaconda3/envs/mamba/include/python3.10 -c -c '/home/hhy/下载/VMamba-main/kernels/selective_scan/csrc/selective_scan/cus/selective_scan.cpp' -o '/home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc/selective_scan/cus/selective_scan.o' -O3 -std=c++17 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=selective_scan_cuda_core -D_GLIBCXX_USE_CXX11_ABI=0
  In file included from /home/hhy/下载/VMamba-main/kernels/selective_scan/csrc/selective_scan/cus/selective_scan.cpp:5:
  /home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/ATen/cuda/CUDAContext.h:5:10: fatal error: cuda_runtime_api.h: No such file or directory
      5 | #include <cuda_runtime_api.h>
        |          ^~~~~~~~~~~~~~~~~~~~
  compilation terminated.
  [2/3] /home/hhy/anaconda3/envs/mamba/bin/nvcc  '-I/home/hhy/下载/VMamba-main/kernels/selective_scan/csrc/selective_scan' -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/TH -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/THC -I/home/hhy/anaconda3/envs/mamba/include -I/home/hhy/anaconda3/envs/mamba/include/python3.10 -c -c '/home/hhy/下载/VMamba-main/kernels/selective_scan/csrc/selective_scan/cus/selective_scan_core_fwd.cu' -o '/home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc/selective_scan/cus/selective_scan_core_fwd.o' -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -std=c++17 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_BFLOAT16_OPERATORS__ -U__CUDA_NO_BFLOAT16_CONVERSIONS__ -U__CUDA_NO_BFLOAT162_OPERATORS__ -U__CUDA_NO_BFLOAT162_CONVERSIONS__ --expt-relaxed-constexpr --expt-extended-lambda --use_fast_math --ptxas-options=-v -lineinfo -gencode arch=compute_70,code=sm_70 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_90,code=sm_90 --threads 4 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=selective_scan_cuda_core -D_GLIBCXX_USE_CXX11_ABI=0
  FAILED: /home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc/selective_scan/cus/selective_scan_core_fwd.o
  /home/hhy/anaconda3/envs/mamba/bin/nvcc  '-I/home/hhy/下载/VMamba-main/kernels/selective_scan/csrc/selective_scan' -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/TH -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/THC -I/home/hhy/anaconda3/envs/mamba/include -I/home/hhy/anaconda3/envs/mamba/include/python3.10 -c -c '/home/hhy/下载/VMamba-main/kernels/selective_scan/csrc/selective_scan/cus/selective_scan_core_fwd.cu' -o '/home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc/selective_scan/cus/selective_scan_core_fwd.o' -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -std=c++17 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_BFLOAT16_OPERATORS__ -U__CUDA_NO_BFLOAT16_CONVERSIONS__ -U__CUDA_NO_BFLOAT162_OPERATORS__ -U__CUDA_NO_BFLOAT162_CONVERSIONS__ --expt-relaxed-constexpr --expt-extended-lambda --use_fast_math --ptxas-options=-v -lineinfo -gencode arch=compute_70,code=sm_70 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_90,code=sm_90 --threads 4 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=selective_scan_cuda_core -D_GLIBCXX_USE_CXX11_ABI=0
  cc1plus: fatal error: cuda_runtime.h: No such file or directory
  compilation terminated.
  cc1plus: fatal error: cuda_runtime.h: No such file or directory
  compilation terminated.
  cc1plus: fatal error: cuda_runtime.h: No such file or directory
  compilation terminated.
  cc1plus: fatal error: cuda_runtime.h: No such file or directory
  compilation terminated.
  [3/3] /home/hhy/anaconda3/envs/mamba/bin/nvcc  '-I/home/hhy/下载/VMamba-main/kernels/selective_scan/csrc/selective_scan' -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/TH -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/THC -I/home/hhy/anaconda3/envs/mamba/include -I/home/hhy/anaconda3/envs/mamba/include/python3.10 -c -c '/home/hhy/下载/VMamba-main/kernels/selective_scan/csrc/selective_scan/cus/selective_scan_core_bwd.cu' -o '/home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc/selective_scan/cus/selective_scan_core_bwd.o' -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -std=c++17 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_BFLOAT16_OPERATORS__ -U__CUDA_NO_BFLOAT16_CONVERSIONS__ -U__CUDA_NO_BFLOAT162_OPERATORS__ -U__CUDA_NO_BFLOAT162_CONVERSIONS__ --expt-relaxed-constexpr --expt-extended-lambda --use_fast_math --ptxas-options=-v -lineinfo -gencode arch=compute_70,code=sm_70 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_90,code=sm_90 --threads 4 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=selective_scan_cuda_core -D_GLIBCXX_USE_CXX11_ABI=0
  FAILED: /home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc/selective_scan/cus/selective_scan_core_bwd.o
  /home/hhy/anaconda3/envs/mamba/bin/nvcc  '-I/home/hhy/下载/VMamba-main/kernels/selective_scan/csrc/selective_scan' -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/TH -I/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/include/THC -I/home/hhy/anaconda3/envs/mamba/include -I/home/hhy/anaconda3/envs/mamba/include/python3.10 -c -c '/home/hhy/下载/VMamba-main/kernels/selective_scan/csrc/selective_scan/cus/selective_scan_core_bwd.cu' -o '/home/hhy/下载/VMamba-main/kernels/selective_scan/build/temp.linux-x86_64-cpython-310/csrc/selective_scan/cus/selective_scan_core_bwd.o' -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -std=c++17 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_BFLOAT16_OPERATORS__ -U__CUDA_NO_BFLOAT16_CONVERSIONS__ -U__CUDA_NO_BFLOAT162_OPERATORS__ -U__CUDA_NO_BFLOAT162_CONVERSIONS__ --expt-relaxed-constexpr --expt-extended-lambda --use_fast_math --ptxas-options=-v -lineinfo -gencode arch=compute_70,code=sm_70 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_90,code=sm_90 --threads 4 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=selective_scan_cuda_core -D_GLIBCXX_USE_CXX11_ABI=0
  cc1plus: fatal error: cuda_runtime.h: No such file or directory
  compilation terminated.
  cc1plus: fatal error: cuda_runtime.h: No such file or directory
  compilation terminated.
  cc1plus: fatal error: cuda_runtime.h: No such file or directory
  compilation terminated.
  cc1plus: fatal error: cuda_runtime.h: No such file or directory
  compilation terminated.
  ninja: build stopped: subcommand failed.
  Traceback (most recent call last):
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 2100, in _run_ninja_build
      subprocess.run(
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/subprocess.py", line 526, in run
      raise CalledProcessError(retcode, process.args,
  subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
  
  The above exception was the direct cause of the following exception:
  
  Traceback (most recent call last):
    File "<string>", line 2, in <module>
    File "<pip-setuptools-caller>", line 34, in <module>
    File "/home/hhy/下载/VMamba-main/kernels/selective_scan/setup.py", line 140, in <module>
      setup(
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/__init__.py", line 104, in setup
      return distutils.core.setup(**attrs)
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 184, in setup
      return run_commands(dist)
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 200, in run_commands
      dist.run_commands()
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
      self.run_command(cmd)
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/dist.py", line 967, in run_command
      super().run_command(command)
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
      cmd_obj.run()
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/wheel/bdist_wheel.py", line 364, in run
      self.run_command("build")
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 316, in run_command
      self.distribution.run_command(command)
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/dist.py", line 967, in run_command
      super().run_command(command)
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
      cmd_obj.run()
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/_distutils/command/build.py", line 132, in run
      self.run_command(cmd_name)
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 316, in run_command
      self.distribution.run_command(command)
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/dist.py", line 967, in run_command
      super().run_command(command)
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
      cmd_obj.run()
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/command/build_ext.py", line 91, in run
      _build_ext.run(self)
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 359, in run
      self.build_extensions()
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 873, in build_extensions
      build_ext.build_extensions(self)
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 479, in build_extensions
      self._build_extensions_serial()
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 505, in _build_extensions_serial
      self.build_extension(ext)
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/command/build_ext.py", line 252, in build_extension
      _build_ext.build_extension(self, ext)
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 560, in build_extension
      objects = self.compiler.compile(
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 686, in unix_wrap_ninja_compile
      _write_ninja_file_and_compile_objects(
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1774, in _write_ninja_file_and_compile_objects
      _run_ninja_build(
    File "/home/hhy/anaconda3/envs/mamba/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 2116, in _run_ninja_build
      raise RuntimeError(message) from e
  RuntimeError: Error compiling objects for extension
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for selective_scan
Running setup.py clean for selective_scan
Failed to build selective_scan
ERROR: Could not build wheels for selective_scan, which is required to install pyproject.toml-based projects

make_data_loader.py

Hello, when I run only the BDA task: in the xBD dataset I downloaded, the train folder contains just three subfolders (images, targets, and labels). I then tried to convert it to the format expected by make_data_loader.py from your paper, but it always fails. How did you change this part of the code? Is there any modified code you could provide? Thank you very much!
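As a starting point, here is a generic sketch (the folder names `pre`/`post` are assumptions — adapt them to whatever make_data_loader.py actually expects) for splitting the flat xBD images folder into pre-/post-disaster subfolders by filename suffix:

```python
import os
import shutil
import tempfile

def split_pre_post(images_dir, out_dir):
    """Copy xBD tiles into pre/ and post/ subfolders based on the filename suffix."""
    for sub in ("pre", "post"):
        os.makedirs(os.path.join(out_dir, sub), exist_ok=True)
    for name in os.listdir(images_dir):
        if "_pre_disaster" in name:
            shutil.copy(os.path.join(images_dir, name), os.path.join(out_dir, "pre", name))
        elif "_post_disaster" in name:
            shutil.copy(os.path.join(images_dir, name), os.path.join(out_dir, "post", name))

# demo on throwaway directories
src, dst = tempfile.mkdtemp(), tempfile.mkdtemp()
for n in ("x_pre_disaster.png", "x_post_disaster.png"):
    open(os.path.join(src, n), "w").close()
split_pre_post(src, dst)
print(sorted(os.listdir(os.path.join(dst, "pre"))))
```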

BCD: loaded pretrained weights do not match the current model

loading model...
Successfully load ckpt /root/pingcc/MambaCD/pretrained_weight/vssm_tiny_0230_ckpt_epoch_262.pth
_IncompatibleKeys(missing_keys=['outnorm0.weight', 'outnorm0.bias', 'outnorm1.weight', 'outnorm1.bias', 'outnorm2.weight', 'outnorm2.bias', 'outnorm3.weight', 'outnorm3.bias'], unexpected_keys=['classifier.norm.weight', 'classifier.norm.bias', 'classifier.head.weight', 'classifier.head.bias', 'layers.2.blocks.4.norm.weight', 'layers.2.blocks.4.norm.bias', 'layers.2.blocks.4.op.x_proj_weight', 'layers.2.blocks.4.op.dt_projs_weight', 'layers.2.blocks.4.op.dt_projs_bias', 'layers.2.blocks.4.op.A_logs', 'layers.2.blocks.4.op.Ds', 'layers.2.blocks.4.op.out_norm.weight', 'layers.2.blocks.4.op.out_norm.bias', 'layers.2.blocks.4.op.in_proj.weight', 'layers.2.blocks.4.op.conv2d.weight', 'layers.2.blocks.4.op.out_proj.weight', 'layers.2.blocks.4.norm2.weight', 'layers.2.blocks.4.norm2.bias', 'layers.2.blocks.4.mlp.fc1.weight', 'layers.2.blocks.4.mlp.fc1.bias', 'layers.2.blocks.4.mlp.fc2.weight', 'layers.2.blocks.4.mlp.fc2.bias'])
Backbone_VSSM load_pretrained

MambaSCD eval is NAN

Training Dataset: SECOND
Environment:

  • Python: 3.12.3
  • torch: 2.2.2+cu118

At first I suspected GT_CD was incorrect, but I found that the GT_CD data is not used to compute the evaluation score. My evaluation result is below:
image

datasets resolution ratio

Hello author, when preparing the LEVIR-CD+ dataset in the required format, should the images all be 1024x1024, all be 256x256, or should both resolutions be kept?
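For what it's worth, cropping 1024x1024 tiles into non-overlapping 256x256 patches can be sketched like this (a generic snippet, not code from the repo):

```python
import numpy as np

def tile(img, size=256):
    """Split an (H, W, C) array into non-overlapping size x size tiles."""
    h, w = img.shape[:2]
    return [img[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]

# a 1024x1024 image yields a 4x4 grid of 256x256 patches
tiles = tile(np.zeros((1024, 1024, 3), dtype=np.uint8))
print(len(tiles))  # 16
```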

Validation confusion

Clearly, the pre-change image results are divided into 7 categories, resulting in 49 possible changes. Why, then, is the parameter passed into SCDD_eval_all set to 37?

Traceback (most recent call last):
  File "/home/dmx_bs/MambaCD2/MambaCD/changedetection/script/train_MambaSCD.py", line 259, in <module>
    main()
  File "/home/dmx_bs/MambaCD2/MambaCD/changedetection/script/train_MambaSCD.py", line 255, in main
    trainer.training()
  File "/home/dmx_bs/MambaCD2/MambaCD/changedetection/script/train_MambaSCD.py", line 145, in training
    kappa_n0, Fscd, IoU_mean, Sek, oa = self.validation()
  File "/home/dmx_bs/MambaCD2/MambaCD/changedetection/script/train_MambaSCD.py", line 203, in validation
    kappa_n0, Fscd, IoU_mean, Sek = SCDD_eval_all(preds_all, labels_all, 37)
  File "/home/dmx_bs/MambaCD/changedetection/utils_func/mcd_utils.py", line 209, in SCDD_eval_all
    assert unique_set.issubset(set([x for x in range(num_class)])), f"unrecognized label number, {unique_set}, {set([x for x in range(num_class)])}"
AssertionError: unrecognized label number, {0, 8, 9, 10, 11, 12, 14, 15, 16, 17, 18, 20, 21, 23, 24, 26, 27, 32, 33, 35, 36, -4, -3}, {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36}
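On the 37 vs. 49 question: one convention that yields 37 classes (an assumption about the repo's label encoding, inferred from num_class = 37) is that all unchanged pixels collapse into a single class 0, and only the 6 real land-cover classes form ordered change pairs, giving 6 × 6 + 1 = 37 rather than 7 × 7 = 49:

```python
NUM_LC = 6  # the six real land-cover classes, indexed 1..6; 0 = no-change

def combine(pre, post):
    """Map a (pre, post) label pair to a single index in [0, 36]."""
    if pre == 0 or post == 0:
        return 0                           # all unchanged pixels collapse to class 0
    return (pre - 1) * NUM_LC + (post - 1) + 1

print(combine(6, 6))  # 36, so num_class = 6 * 6 + 1 = 37
```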

Problem training MambaSCD on the SECOND dataset

When stepping into random_crop_mcd in imutils.py, the pre- and post-change labels seem to have shape (512, 512, 3), while the method treats them as (512, 512), so the dimensions do not match. How should this be fixed?
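If the labels are stored as RGB colour maps, one generic remedy (the palette below is a placeholder, not the actual SECOND palette) is to collapse them to single-channel index maps before cropping:

```python
import numpy as np

# placeholder palette: map each label colour to a class index
# (substitute the dataset's actual colour-to-class table)
PALETTE = {(0, 0, 0): 0, (0, 128, 0): 1, (128, 128, 128): 2}

def rgb_to_index(label_rgb):
    """Collapse an (H, W, 3) colour label into an (H, W) index map."""
    out = np.zeros(label_rgb.shape[:2], dtype=np.uint8)
    for colour, idx in PALETTE.items():
        out[np.all(label_rgb == colour, axis=-1)] = idx
    return out

demo = np.zeros((2, 2, 3), dtype=np.uint8)
demo[0, 0] = (0, 128, 0)
print(rgb_to_index(demo))
```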

Why does the Encoder code contain only VSSBlock layers, with no downsampling layers applied?

Mamba_backbone.py should be the Encoder module. The layer_forward function inside its forward returns two values, o and x. Why is only o (the output of the VSSBlocks) used as out in the end, rather than x? And why do the VSSBlocks come first — shouldn't downsampling happen before them?
    def forward(self, x):
        def layer_forward(l, x):
            # blocks are applied first, then downsample
            x = l.blocks(x)
            y = l.downsample(x)
            return x, y

        x = self.patch_embed(x)
        outs = []
        # this loop corresponds to the Encoder
        for i, layer in enumerate(self.layers):
            o, x = layer_forward(layer, x)  # (B, H, W, C)
            if i in self.out_indices:
                norm_layer = getattr(self, f"outnorm{i}")
                # the 4 corresponding NormLayers added after the Downsample;
                # only the result o is used in the end?
                out = norm_layer(o)
                if not self.channel_first:
                    out = out.permute(0, 3, 1, 2).contiguous()
                outs.append(out)

        if len(self.out_indices) == 0:
            return x

        return outs

cpu inference

I found that both the forward and backward passes of the model depend on selective_scan, and I want to run a trained model on a device without an NVIDIA GPU, but selective_scan requires CUDA to install. Is there a CPU or pure-PyTorch implementation of selective_scan?
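For reference, the selective scan has a straightforward sequential formulation that runs on CPU. The sketch below follows the shape conventions of the reference implementation in the original Mamba codebase (an assumption — verify numerics against the CUDA kernel before relying on it, and expect it to be much slower):

```python
import torch

def selective_scan_ref(u, delta, A, B, C, D=None, delta_bias=None, delta_softplus=False):
    """Sequential selective scan in pure PyTorch (CPU-friendly).

    Assumed shapes: u, delta: (batch, dim, length); A: (dim, state);
    B, C: (batch, state, length); D: (dim,).
    """
    if delta_bias is not None:
        delta = delta + delta_bias[..., None]
    if delta_softplus:
        delta = torch.nn.functional.softplus(delta)
    dA = torch.exp(delta.unsqueeze(-1) * A[None, :, None, :])   # (batch, dim, length, state)
    dBu = delta.unsqueeze(-1) * B.transpose(1, 2).unsqueeze(1) * u.unsqueeze(-1)
    h = u.new_zeros(u.size(0), u.size(1), A.size(1))            # hidden state (batch, dim, state)
    ys = []
    for t in range(u.size(-1)):                                 # sequential recurrence over length
        h = dA[:, :, t] * h + dBu[:, :, t]
        ys.append((h * C[:, :, t].unsqueeze(1)).sum(-1))        # readout: y_t = C_t . h_t
    y = torch.stack(ys, dim=-1)                                 # (batch, dim, length)
    if D is not None:
        y = y + D[:, None] * u                                  # skip connection
    return y

# sanity check: with A = 0 the recurrence reduces to a running sum of B * u
u = torch.tensor([[[1.0, 2.0]]])
ones = torch.ones(1, 1, 2)
y = selective_scan_ref(u, ones, torch.zeros(1, 1), ones, ones)
print(y)  # tensor([[[1., 3.]]])
```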

MambaBCD accuracy on the SYSU dataset is lower than expected

IncompatibleKeys(missing_keys=['outnorm0.weight', 'outnorm0.bias', 'outnorm1.weight', 'outnorm1.bias', 'outnorm2.weight', 'outnorm2.bias', 'outnorm3.weight', 'outnorm3.bias'], unexpected_keys=['classifier.norm.weight', 'classifier.norm.bias', 'classifier.head.weight', 'classifier.head.bias'])
Backbone_VSSM load_pretrained

Request for Trained Weights on xBD Dataset from ChangeMamba Project

Hello,

I am highly interested in your work on "ChangeMamba: Remote Sensing Change Detection with Spatio-Temporal State Space Model" posted on GitHub. I noticed that you have successfully trained the MambaBDA model on the xBD dataset and achieved remarkable results. I am currently engaged in related research and would greatly appreciate the opportunity to test my dataset using your trained weights.

Would it be possible for you to share the weights trained on the xBD dataset? It would be immensely beneficial for my research, and I will ensure to acknowledge and thank your work in my subsequent studies.

Thank you very much for your consideration and assistance!

Best regards,

xczhou

something about "infer_MambaBCD.py"

"Hello, I would like to use the SCD model for prediction, but I only see 'infer_MambaBCD.py' available. Will there be an update to include 'infer_MambaSCD.py'? Thank you very much."

Questions about the calculation of evaluation indices(IOU)

Thank you very much, your work is very meaningful, but I have some questions. Thank you.
image
In the table above, is the IoU that of the foreground (building) change class?
But in your code, IoU is calculated like this
image
It doesn't seem to be the IoU of the foreground (building) change class; here it should probably subtract self.confusion_matrix[1, 1] instead of adding it.
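For comparison, the foreground (changed-class) IoU from a 2×2 confusion matrix — assuming rows are ground truth and columns are predictions (that layout is an assumption) — is TP / (TP + FP + FN):

```python
import numpy as np

# cm[i, j]: pixels with true class i predicted as class j (0 = unchanged, 1 = changed)
cm = np.array([[90, 5],
               [3,  2]])
tp = cm[1, 1]           # changed, predicted changed
fp = cm[0, 1]           # unchanged, predicted changed
fn = cm[1, 0]           # changed, predicted unchanged
iou_fg = tp / (tp + fp + fn)   # IoU of the changed (foreground) class
print(iou_fg)  # 0.2
```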

SCD pretrained weights

Hi, I am very interested in this awesome job. I've been doing some work related to semantic change detection recently and I'd like to use your pretrained weights to train on my dataset. Can you please provide me the weights for the semantic change detection model?

FileNotFoundError: No such file: '/media/hhy/Ventoy/xbd/train/images/hurricane-florence_00000263_pre_disaster_pre_disaster.png.png'

Hello, I am trying to run the BDA task with train_MambaBDA.py and got the error below. I thought it was a problem with the dataset, but I downloaded the xBD dataset again and still hit the same problem. Could you help me with this? In addition, I noticed that the hold-out data, which is not needed for the BDA task, is used there?
I also encountered the same problem when running train_MambaSCD.py.

parser.add_argument('--pretrained_weight_path', type=str,default='/home/hhy/下载/MambaCD-master/pretrained_weight/vssm_small_0229_ckpt_epoch_222.pth'
'')

parser.add_argument('--dataset', type=str, default='xBD')
parser.add_argument('--type', type=str, default='train')
parser.add_argument('--train_dataset_path', type=str, default='/media/hhy/Ventoy/xbd/train')
parser.add_argument('--train_data_list_path', type=str, default='/media/hhy/Ventoy/xbd/train/train.txt')
parser.add_argument('--test_dataset_path', type=str, default='/media/hhy/Ventoy/xbd/test')
parser.add_argument('--test_data_list_path', type=str, default='/media/hhy/Ventoy/xbd/test/test.txt')
parser.add_argument('--shuffle', type=bool, default=True)
parser.add_argument('--batch_size', type=int, default=4)
parser.add_argument('--crop_size', type=int, default=256)
parser.add_argument('--train_data_name_list', type=list)
parser.add_argument('--test_data_name_list', type=list)
parser.add_argument('--start_iter', type=int, default=0)
parser.add_argument('--cuda', type=bool, default=True)
parser.add_argument('--max_iters', type=int, default=800000)
parser.add_argument('--model_type', type=str, default='bda——small')
parser.add_argument('--model_param_path', type=str, default='../saved_models')

parser.add_argument('--resume', type=str)
parser.add_argument('--learning_rate', type=float, default=1e-4)
parser.add_argument('--momentum', type=float, default=0.9)
parser.add_argument('--weight_decay', type=float, default=5e-3)

False
0%| | 0/200000 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/hhy/下载/MambaCD-master/changedetection/script/train_MambaBDA.py", line 235, in <module>
main()
File "/home/hhy/下载/MambaCD-master/changedetection/script/train_MambaBDA.py", line 231, in main
trainer.training()
File "/home/hhy/下载/MambaCD-master/changedetection/script/train_MambaBDA.py", line 97, in training
itera, data = train_enumerator.__next__()
File "/home/hhy/anaconda3/envs/mamba/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 652, in __next__
data = self._next_data()
File "/home/hhy/anaconda3/envs/mamba/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1347, in _next_data
return self._process_data(data)
File "/home/hhy/anaconda3/envs/mamba/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1373, in _process_data
data.reraise()
File "/home/hhy/anaconda3/envs/mamba/lib/python3.9/site-packages/torch/_utils.py", line 461, in reraise
raise exception
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/hhy/anaconda3/envs/mamba/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
data = fetcher.fetch(index)
File "/home/hhy/anaconda3/envs/mamba/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/hhy/anaconda3/envs/mamba/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/hhy/下载/MambaCD-master/changedetection/datasets/make_data_loader.py", line 189, in __getitem__
pre_img = self.loader(pre_path)
File "/home/hhy/下载/MambaCD-master/changedetection/datasets/make_data_loader.py", line 14, in img_loader
img = np.array(imageio.imread(path), np.float32)
File "/home/hhy/anaconda3/envs/mamba/lib/python3.9/site-packages/imageio/__init__.py", line 97, in imread
return imread_v2(uri, format=format, **kwargs)
File "/home/hhy/anaconda3/envs/mamba/lib/python3.9/site-packages/imageio/v2.py", line 359, in imread
with imopen(uri, "ri", **imopen_args) as file:
File "/home/hhy/anaconda3/envs/mamba/lib/python3.9/site-packages/imageio/core/imopen.py", line 113, in imopen
request = Request(uri, io_mode, format_hint=format_hint, extension=extension)
File "/home/hhy/anaconda3/envs/mamba/lib/python3.9/site-packages/imageio/core/request.py", line 247, in __init__
self._parse_uri(uri)
File "/home/hhy/anaconda3/envs/mamba/lib/python3.9/site-packages/imageio/core/request.py", line 407, in _parse_uri
raise FileNotFoundError("No such file: '%s'" % fn)
FileNotFoundError: No such file: '/media/hhy/Ventoy/xbd/train/images/hurricane-florence_00000263_pre_disaster_pre_disaster.png.png'
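The doubled suffix ('..._pre_disaster_pre_disaster.png.png') suggests the entries in train.txt already carry the suffix and extension that the dataset class appends again. A hedged sketch for building a list of bare stems (assuming the loader appends '_pre_disaster.png' itself — check make_data_loader.py for the exact convention):

```python
import os
import tempfile

def build_stem_list(images_dir, suffix="_pre_disaster.png"):
    """Collect tile stems so the loader can re-append its own suffix/extension."""
    return sorted(n[: -len(suffix)] for n in os.listdir(images_dir) if n.endswith(suffix))

# demo on a throwaway directory
d = tempfile.mkdtemp()
for n in ("a_00000263_pre_disaster.png", "a_00000263_post_disaster.png"):
    open(os.path.join(d, n), "w").close()
print(build_stem_list(d))  # ['a_00000263']
```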

cd kernels/selective_scan && pip install . ERROR

error: subprocess-exited-with-error

× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [17 lines of output]

  torch.__version__  = 1.13.0+cu117




  CUDA_HOME = C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7


  running bdist_wheel
  running build
  running build_ext
  C:\Users\SY\.conda\envs\cd\lib\site-packages\torch\utils\cpp_extension.py:358: UserWarning: Error checking compiler version for cl: [WinError 2] The system cannot find the file specified.
    warnings.warn(f'Error checking compiler version for {compiler}: {error}')
  building 'selective_scan_cuda_core' extension
  error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for selective_scan
Running setup.py clean for selective_scan
Failed to build selective_scan
ERROR: Could not build wheels for selective_scan, which is required to install pyproject.toml-based projects

Low reproduced accuracy on the SECOND dataset

Hello author, thank you for sharing this work! When reproducing MambaSCD_small on the SECOND dataset, I cannot reach the accuracy reported in the paper; SeK is only 16. All hyperparameters follow the values given in the paper, but when loading the pretrained weights the following is shown:
Successfully load ckpt MambaCD-master/pretrained_weight/vssm_small_0229_ckpt_epoch_222.pth
_IncompatibleKeys(missing_keys=['outnorm0.weight', 'outnorm0.bias', 'outnorm1.weight', 'outnorm1.bias', 'outnorm2.weight', 'outnorm2.bias', 'outnorm3.weight', 'outnorm3.bias'], unexpected_keys=['classifier.norm.weight', 'classifier.norm.bias', 'classifier.head.weight', 'classifier.head.bias'])
Could you provide the complete pretrained weights? Looking forward to your reply, thank you very much!

loss=nan

Hello, I used the 256x256 WHU-CD dataset directly, since my machine currently cannot handle the 1024 version. The resulting accuracy is very poor, and the loss very easily becomes NaN. Do you have any suggestions? Thanks!
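One common mitigation for NaN losses (a generic suggestion, not something taken from the repo's scripts) is to clip the global gradient norm before the optimizer step, alongside lowering the learning rate. A minimal sketch with a stand-in model:

```python
import torch

model = torch.nn.Linear(4, 2)                      # stand-in for the change-detection network
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
x, y = torch.randn(8, 4), torch.randint(0, 2, (8,))

loss = torch.nn.functional.cross_entropy(model(x), y)
opt.zero_grad()
loss.backward()
# cap the global gradient norm so a single bad batch cannot blow up the weights
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
opt.step()
```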

something about "infer_MambaBDA.py"

Hello, I would like to use the BDA model for prediction, but I only see 'infer_MambaBCD.py' and 'infer_MambaSCD.py' available. Will there be an update to include 'infer_MambaBDA.py'? Thank you very much.

MambaBDA-Tiny on xBD

Hi! Congratulations on your great work! I am wondering what happened to the MambaBDA-Tiny pretrained weights on xBD?

dataset category

Hello,
I'm very interested in your work and have some questions from reading the code. Regarding the dataset categories used in the article and the number of categories in the ADE dataset: I see the number of categories in the code is also 150. When using my own data, do I need to change the category type and count anywhere else?
Thanks!

CUDA version requirements

hello!

First of all, I'd like to express my sincere thanks for your great work on the MambaCD project. This project has been of great help to my research and work, for which I am very appreciative and grateful.

While trying to install and use the project, I noticed that the installation process requires a CUDA version of at least 11.6. Due to hardware and other dependency limitations, the CUDA version installed in my environment can only be up to 11.4.

Therefore, I would like to ask a few questions:

  1. Does the project absolutely require CUDA 11.6 and above to run properly?
  2. Do you have a suggested solution or alternative for environments that only support CUDA 11.4? For example, is it possible to modify the configuration or source code to be compatible with CUDA 11.4?
  3. If the code needs to be adjusted to support CUDA 11.4, can you provide some advice or guidance to help me achieve this?

I'd really like to have this work successfully in my environment. Any advice and help on how to resolve this compatibility issue would be extremely valuable and appreciated.

Very much looking forward to your reply!

best wishes!

I encountered a problem!

Hello, author. When I run the BCD code, I encounter an error: torch.cuda.OutOfMemoryError: CUDA out of memory. So, I wanted to ask how much GPU memory is required to run this part of the code?
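Until the authors answer, one generic way to reduce GPU memory (an assumption about your training loop, not the repo's actual script) is mixed-precision training, which roughly halves activation memory on CUDA. A minimal device-agnostic sketch with a stand-in model:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"

model = torch.nn.Linear(16, 2).to(device)          # stand-in for the change-detection network
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

x = torch.randn(4, 16, device=device)
y = torch.randint(0, 2, (4,), device=device)

with torch.autocast(device_type=device, enabled=use_amp):  # fp16 activations on GPU
    loss = torch.nn.functional.cross_entropy(model(x), y)

opt.zero_grad()
scaler.scale(loss).backward()   # scale the loss to avoid fp16 gradient underflow
scaler.step(opt)
scaler.update()
```

Reducing batch_size or crop_size has the same effect at the cost of the scaling questions discussed elsewhere in these issues.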

model weights of BDA

Hi, thanks for your great contribution to the building damage assessment community. I noticed that you released some model weights for BCD and SCD, but the BDA weights are absent. I've reproduced the small version of MambaBDA (batch size set to 8 due to limited GPU memory), and the highest F1_oa only reaches 80% (±0.3). Could you release the model weights of MambaBDA? 😄

Questions about batch size, iterations, and epochs

I'd like to ask what machine the experiments ran on. On a single 4090 running the SYSU dataset for the BCD task, I can only set batch_size to 8. Should max_iters then be changed to 640000 (the paper sets training iters to 20000, while the README command line uses bs=16 and max_iters=320000) so that the number of epochs matches? Or do I only need to match the paper's iters=20000, in which case I would set max_iters=40000 and cut the compute cost by more than tenfold?
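For reference, the scaling in the question is plain arithmetic: to keep the number of epochs (total samples seen) constant, halving the batch size doubles the required iterations.

```python
def scale_iters(iters, bs_old, bs_new):
    """Keep total samples seen (iters * batch size), and therefore epochs, constant."""
    return iters * bs_old // bs_new

print(scale_iters(320000, 16, 8))  # 640000 (README setting at bs=8)
print(scale_iters(20000, 16, 8))   # 40000  (paper setting at bs=8)
```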

What datasets pretrained weights trained on?

Hi, I wonder which data you used to pretrain the weights VMamba-Tiny, VMamba-Small, and VMamba-Base. I downloaded the original VMamba weights (the same version as Mamba_Small) to train and got relatively lower accuracy on the same dataset, compared with the pretrained weights you provided. 😞
I want to generate new pretrained weights for a new Mamba, and I would appreciate an introduction to how. 😃

I have a bug: NameError: name 'selective_scan_cuda_oflex' is not defined. Can you help me solve it?

#!/bin/bash

python /home/ubuntu20/Desktop/MambaCD1/MambaCD/script/train_MambaBCD.py \
    --dataset 'LEVIR-CD+' \
    --batch_size 16 \
    --crop_size 256 \
    --max_iters 320000 \
    --model_type MambaBCD_Small \
    --model_param_path '/home/ubuntu20/Desktop/MambaCD1/MambaCD/changedetection/saved_models' \
    --train_dataset_path '/home/ubuntu20/Desktop/train' \
    --train_data_list_path '/home/ubuntu20/Desktop/train/train.txt' \
    --test_dataset_path '/home/ubuntu20/Desktop/train' \
    --test_data_list_path '/home/ubuntu20/Desktop/train/train.txt' \
    --cfg '/home/ubuntu20/Desktop/MambaCD1/MambaCD/changedetection/configs/vssm1/vssm_small_224.yaml' \
    --pretrained_weight_path '/home/ubuntu20/Desktop/MambaCD1/MambaCD/pretrained_weight/vssm_small_0229_ckpt_epoch_222.pth'

This is my main file

(mamba) ubuntu20@ubuntu20-System-Product-Name:~/Desktop/MambaCD1/MambaCD/changedetection$ sh main.sh
=> merge config from /home/ubuntu20/Desktop/MambaCD1/MambaCD/changedetection/configs/vssm1/vssm_small_224.yaml
Successfully load ckpt /home/ubuntu20/Desktop/MambaCD1/MambaCD/pretrained_weight/vssm_small_0229_ckpt_epoch_222.pth
_IncompatibleKeys(missing_keys=['outnorm0.weight', 'outnorm0.bias', 'outnorm1.weight', 'outnorm1.bias', 'outnorm2.weight', 'outnorm2.bias', 'outnorm3.weight', 'outnorm3.bias'], unexpected_keys=['classifier.norm.weight', 'classifier.norm.bias', 'classifier.head.weight', 'classifier.head.bias'])
0%| | 0/20000 [00:16<?, ?it/s]
Traceback (most recent call last):
File "/home/ubuntu20/Desktop/MambaCD1/MambaCD/changedetection/script/train_MambaBCD.py", line 207, in <module>
main()
File "/home/ubuntu20/Desktop/MambaCD1/MambaCD/changedetection/script/train_MambaBCD.py", line 203, in main
trainer.training()
File "/home/ubuntu20/Desktop/MambaCD1/MambaCD/changedetection/script/train_MambaBCD.py", line 104, in training
output_1 = self.deep_model(pre_change_imgs, post_change_imgs)
File "/media/ubuntu20/EXOS_1/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu20/Desktop/MambaCD1/MambaCD/changedetection/models/MambaBCD.py", line 67, in forward
pre_features = self.encoder(pre_data)
File "/media/ubuntu20/EXOS_1/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu20/Desktop/MambaCD1/MambaCD/changedetection/models/Mamba_backbone.py", line 50, in forward
o, x = layer_forward(layer, x) # (B, H, W, C)
File "/home/ubuntu20/Desktop/MambaCD1/MambaCD/changedetection/models/Mamba_backbone.py", line 43, in layer_forward
x = l.blocks(x)
File "/media/ubuntu20/EXOS_1/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/media/ubuntu20/EXOS_1/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/nn/modules/container.py", line 139, in forward
input = module(input)
File "/media/ubuntu20/EXOS_1/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu20/Desktop/MambaCD1/MambaCD/classification/models/vmamba.py", line 1360, in forward
return self._forward(input)
File "/home/ubuntu20/Desktop/MambaCD1/MambaCD/classification/models/vmamba.py", line 1348, in _forward
x = input + self.drop_path(self.op(self.norm(input)))
File "/media/ubuntu20/EXOS_1/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu20/Desktop/MambaCD1/MambaCD/classification/models/vmamba.py", line 1147, in forwardv2
y = self.forward_core(x)
File "/home/ubuntu20/Desktop/MambaCD1/MambaCD/classification/models/vmamba.py", line 1124, in forward_corev2
return cross_selective_scan(
File "/home/ubuntu20/Desktop/MambaCD1/MambaCD/classification/models/vmamba.py", line 406, in cross_selective_scan
ys: torch.Tensor = selective_scan(
File "/home/ubuntu20/Desktop/MambaCD1/MambaCD/classification/models/vmamba.py", line 372, in selective_scan
return SelectiveScan.apply(u, delta, A, B, C, D, delta_bias, delta_softplus, nrows, backnrows, ssoflex)
File "/media/ubuntu20/EXOS_1/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/cuda/amp/autocast_mode.py", line 110, in decorate_fwd
return fwd(*args, **kwargs)
File "/home/ubuntu20/Desktop/MambaCD1/MambaCD/classification/models/vmamba.py", line 299, in forward
out, x, *rest = selective_scan_cuda_oflex.fwd(u, delta, A, B, C, D, delta_bias, delta_softplus, 1, oflex)
NameError: name 'selective_scan_cuda_oflex' is not defined
(mamba) ubuntu20@ubuntu20-System-Product-Name:~/Desktop/MambaCD1/MambaCD/changedetection$
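A quick diagnostic for this class of NameError is to check whether the compiled extensions are actually importable in the active environment (the extension names below are taken from the error message and the build log above):

```python
import importlib.util

missing = [name for name in ("selective_scan_cuda_oflex", "selective_scan_cuda_core")
           if importlib.util.find_spec(name) is None]
if missing:
    print("rebuild needed for:", missing)
    print("-> cd kernels/selective_scan && pip install .")
```

If both are missing, the `cd kernels/selective_scan && pip install .` step likely failed or ran in a different environment.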

MambaBDA training error: IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

Hello, when I was training MambaBDA-Tiny, the training results were good, but at the 10500th evaluation round I got the error: IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1). I printed probas.shape afterwards and it looked correct. Is there any way to handle this situation?

11%|█         | 10965/100000 [10:43:56<87:08:49,  3.52s/it]
Traceback (most recent call last):
  File "/home/wg/MambaCD-master/changedetection/script/train_MambaBDA.py", line 236, in <module>
    main()
  File "/home/wg/MambaCD-master/changedetection/script/train_MambaBDA.py", line 232, in main
    trainer.training()
  File "/home/wg/MambaCD-master/changedetection/script/train_MambaBDA.py", line 120, in training
    lovasz_loss_clf = L.lovasz_softmax(F.softmax(output_clf, dim=1), labels_clf, ignore=255)
  File "/home/wg/MambaCD-master/changedetection/utils_func/lovasz_loss.py", line 167, in lovasz_softmax
    loss = lovasz_softmax_flat(*flatten_probas(probas, labels, ignore), classes=classes)
  File "/home/wg/MambaCD-master/changedetection/utils_func/lovasz_loss.py", line 181, in lovasz_softmax_flat
    C = probas.size(1)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
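A plausible cause (an assumption from the error, not a confirmed fix) is a validation batch in which every pixel carries the ignore label 255, so the flattened probas tensor comes back empty/1-D and probas.size(1) raises. A guard along these lines, placed before the size call, would skip such batches:

```python
import torch

def safe_num_classes(probas):
    """Return 0 for an all-ignored (empty) batch instead of letting size(1) raise."""
    if probas.numel() == 0:
        return 0                     # caller can contribute zero loss for this batch
    return probas.size(1) if probas.dim() > 1 else 1

print(safe_num_classes(torch.empty(0, 7)))  # 0
print(safe_num_classes(torch.randn(5, 7)))  # 7
```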

Total number of epochs

Could you please tell me the approximate number of epochs for each of the three datasets in the BCD task?

dataset

I'd like to ask about the dataset: the SECOND source dataset for SCD does not include the GT_CD data. Do I need to generate it myself?
