
pytorch_sparse's Introduction

PyTorch Sparse



This package consists of a small extension library of optimized sparse matrix operations with autograd support. It currently provides the following methods, documented in the Functions section below: coalesce, transpose, sparse-dense matrix multiplication (spmm), and sparse-sparse matrix multiplication (spspmm).

All included operations work on varying data types and are implemented both for CPU and GPU. To avoid the hassle of creating torch.sparse_coo_tensor, this package defines operations on sparse tensors by simply passing index and value tensors as arguments (with the same shapes as defined in PyTorch). Note that only value comes with autograd support, as index is discrete and therefore not differentiable.
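For illustration, the following minimal sketch (not part of the package itself) builds the same sparse matrix once as an index/value pair, as expected by this package, and once as a torch.sparse_coo_tensor for comparison:

import torch

index = torch.tensor([[0, 0, 1],   # row indices
                      [0, 2, 1]])  # column indices
value = torch.tensor([1.0, 2.0, 3.0])

# The equivalent built-in representation, shown densified for readability:
dense = torch.sparse_coo_tensor(index, value, size=(2, 3)).to_dense()
print(dense)
# tensor([[1., 0., 2.],
#         [0., 3., 0.]])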

Installation

Anaconda

Update: You can now install pytorch-sparse via Anaconda for all major OS/PyTorch/CUDA combinations 🤗 Given that you have pytorch >= 1.8.0 installed, simply run

conda install pytorch-sparse -c pyg

Binaries

We alternatively provide pip wheels for all major OS/PyTorch/CUDA combinations, see here.

PyTorch 2.3

To install the binaries for PyTorch 2.3.0, simply run

pip install torch-scatter torch-sparse -f https://data.pyg.org/whl/torch-2.3.0+${CUDA}.html

where ${CUDA} should be replaced by either cpu, cu118, or cu121 depending on your PyTorch installation.

          cpu    cu118   cu121
Linux     ✅     ✅      ✅
Windows   ✅     ✅      ✅
macOS     ✅

PyTorch 2.2

To install the binaries for PyTorch 2.2.0, simply run

pip install torch-scatter torch-sparse -f https://data.pyg.org/whl/torch-2.2.0+${CUDA}.html

where ${CUDA} should be replaced by either cpu, cu118, or cu121 depending on your PyTorch installation.

          cpu    cu118   cu121
Linux     ✅     ✅      ✅
Windows   ✅     ✅      ✅
macOS     ✅

Note: Binaries of older versions are also provided for PyTorch 1.4.0, PyTorch 1.5.0, PyTorch 1.6.0, PyTorch 1.7.0/1.7.1, PyTorch 1.8.0/1.8.1, PyTorch 1.9.0, PyTorch 1.10.0/1.10.1/1.10.2, PyTorch 1.11.0, PyTorch 1.12.0/1.12.1, PyTorch 1.13.0/1.13.1, PyTorch 2.0.0/2.0.1, PyTorch 2.1.0/2.1.1/2.1.2 (following the same procedure). For older versions, you need to explicitly specify the latest supported version number or install via pip install --no-index in order to prevent a manual installation from source. You can look up the latest supported version number here.

From source

Ensure that at least PyTorch 1.7.0 is installed and verify that cuda/bin and cuda/include are in your $PATH and $CPATH respectively, e.g.:

$ python -c "import torch; print(torch.__version__)"
>>> 1.7.0

$ echo $PATH
>>> /usr/local/cuda/bin:...

$ echo $CPATH
>>> /usr/local/cuda/include:...

If you want to additionally build torch-sparse with METIS support, e.g. for partitioning, please download and install the METIS library by following the instructions in the Install.txt file. Note that METIS needs to be installed with 64 bit IDXTYPEWIDTH by changing include/metis.h. Afterwards, set the environment variable WITH_METIS=1.

Then run:

pip install torch-scatter torch-sparse

When running in a docker container without NVIDIA driver, PyTorch needs to evaluate the compute capabilities and may fail. In this case, ensure that the compute capabilities are set via TORCH_CUDA_ARCH_LIST, e.g.:

export TORCH_CUDA_ARCH_LIST="6.0 6.1 7.2+PTX 7.5+PTX"

Functions

Coalesce

torch_sparse.coalesce(index, value, m, n, op="add") -> (torch.LongTensor, torch.Tensor)

Row-wise sorts index and removes duplicate entries. Duplicate entries are removed by scattering them together. For scattering, any operation of torch_scatter can be used.

Parameters

  • index (LongTensor) - The index tensor of sparse matrix.
  • value (Tensor) - The value tensor of sparse matrix.
  • m (int) - The first dimension of sparse matrix.
  • n (int) - The second dimension of sparse matrix.
  • op (string, optional) - The scatter operation to use. (default: "add")

Returns

  • index (LongTensor) - The coalesced index tensor of sparse matrix.
  • value (Tensor) - The coalesced value tensor of sparse matrix.

Example

import torch
from torch_sparse import coalesce

index = torch.tensor([[1, 0, 1, 0, 2, 1],
                      [0, 1, 1, 1, 0, 0]])
value = torch.Tensor([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7]])

index, value = coalesce(index, value, m=3, n=2)
print(index)
tensor([[0, 1, 1, 2],
        [1, 0, 1, 0]])
print(value)
tensor([[6.0, 8.0],
        [7.0, 9.0],
        [3.0, 4.0],
        [5.0, 6.0]])

Transpose

torch_sparse.transpose(index, value, m, n) -> (torch.LongTensor, torch.Tensor)

Transposes dimensions 0 and 1 of a sparse matrix.

Parameters

  • index (LongTensor) - The index tensor of sparse matrix.
  • value (Tensor) - The value tensor of sparse matrix.
  • m (int) - The first dimension of sparse matrix.
  • n (int) - The second dimension of sparse matrix.
  • coalesced (bool, optional) - If set to False, will not coalesce the output. (default: True)

Returns

  • index (LongTensor) - The transposed index tensor of sparse matrix.
  • value (Tensor) - The transposed value tensor of sparse matrix.

Example

import torch
from torch_sparse import transpose

index = torch.tensor([[1, 0, 1, 0, 2, 1],
                      [0, 1, 1, 1, 0, 0]])
value = torch.Tensor([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7]])

index, value = transpose(index, value, 3, 2)
print(index)
tensor([[0, 0, 1, 1],
        [1, 2, 0, 1]])
print(value)
tensor([[7.0, 9.0],
        [5.0, 6.0],
        [6.0, 8.0],
        [3.0, 4.0]])

Sparse Dense Matrix Multiplication

torch_sparse.spmm(index, value, m, n, matrix) -> torch.Tensor

Matrix product of a sparse matrix with a dense matrix.

Parameters

  • index (LongTensor) - The index tensor of sparse matrix.
  • value (Tensor) - The value tensor of sparse matrix.
  • m (int) - The first dimension of sparse matrix.
  • n (int) - The second dimension of sparse matrix.
  • matrix (Tensor) - The dense matrix.

Returns

  • out (Tensor) - The dense output matrix.

Example

import torch
from torch_sparse import spmm

index = torch.tensor([[0, 0, 1, 2, 2],
                      [0, 2, 1, 0, 1]])
value = torch.Tensor([1, 2, 4, 1, 3])
matrix = torch.Tensor([[1, 4], [2, 5], [3, 6]])

out = spmm(index, value, 3, 3, matrix)
print(out)
tensor([[7.0, 16.0],
        [8.0, 20.0],
        [7.0, 19.0]])

Sparse Sparse Matrix Multiplication

torch_sparse.spspmm(indexA, valueA, indexB, valueB, m, k, n) -> (torch.LongTensor, torch.Tensor)

Matrix product of two sparse tensors. Both input sparse matrices need to be coalesced (use the coalesced argument to force coalescing).

Parameters

  • indexA (LongTensor) - The index tensor of first sparse matrix.
  • valueA (Tensor) - The value tensor of first sparse matrix.
  • indexB (LongTensor) - The index tensor of second sparse matrix.
  • valueB (Tensor) - The value tensor of second sparse matrix.
  • m (int) - The first dimension of first sparse matrix.
  • k (int) - The second dimension of first sparse matrix and first dimension of second sparse matrix.
  • n (int) - The second dimension of second sparse matrix.
  • coalesced (bool, optional) - If set to True, will coalesce both input sparse matrices. (default: False)

Returns

  • index (LongTensor) - The output index tensor of sparse matrix.
  • value (Tensor) - The output value tensor of sparse matrix.

Example

import torch
from torch_sparse import spspmm

indexA = torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]])
valueA = torch.Tensor([1, 2, 3, 4, 5])

indexB = torch.tensor([[0, 2], [1, 0]])
valueB = torch.Tensor([2, 4])

indexC, valueC = spspmm(indexA, valueA, indexB, valueB, 3, 3, 2)
print(indexC)
tensor([[0, 1, 2],
        [0, 1, 1]])
print(valueC)
tensor([8.0, 6.0, 8.0])

Running tests

pytest

C++ API

torch-sparse also offers a C++ API that contains the C++ equivalent of the Python models. For this, we need to add TorchLib to the -DCMAKE_PREFIX_PATH (e.g., it may exist in {CONDA}/lib/python{X.X}/site-packages/torch if installed via conda):

mkdir build
cd build
# Add -DWITH_CUDA=on for CUDA support
cmake -DCMAKE_PREFIX_PATH="..." ..
make
make install

pytorch_sparse's People

Contributors

adam1679, agarwalsaurav, antoineprv, bwdeng20, chantat, damianszwichtenberg, dfalbel, dkbhaskaran, ekagra-ranjan, james77777778, kgajdamo, lgray, mariogeiger, miaoneng, mpariente, nistath, nripeshn, olhababicheva, padarn, rexying, romeov, rusty1s, seanliu96, shagunsodhani, shi27feng, wang-ps, yanbing-j, yaoyaowd, zenotan, zhenghongming888


pytorch_sparse's Issues

Can metis be used in weighted graph ?

Hi, I wonder whether metis in pytorch_sparse can be used on a weighted graph. When I read the code in metis.py, I cannot find anything about edge attributes, while the cluster_gcn implementation of pytorch_geometric handles edge attributes in data/cluster.py.

undefined symbol: cusparseScsrgemm

Hi,

I'm trying to get the torch-geometric package to work and everything installs just fine.

But I'm getting the following error when importing from torch_sparse:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/philipp/.local/share/virtualenvs/pytorch_geometric-4vUoAfdD/lib/python3.6/site-packages/torch_sparse/__init__.py", line 4, in <module>
    from .spspmm import spspmm
  File "/home/philipp/.local/share/virtualenvs/pytorch_geometric-4vUoAfdD/lib/python3.6/site-packages/torch_sparse/spspmm.py", line 7, in <module>
    import spspmm_cuda
ImportError: /home/philipp/.local/share/virtualenvs/pytorch_geometric-4vUoAfdD/lib/python3.6/site-packages/spspmm_cuda.cpython-36m-x86_64-linux-gnu.so: undefined symbol: cusparseScsrgemm

For me that can be reproduced with:

#!/bin/bash
pipenv --python 3.6
export PATH=/opt/cuda/bin:$PATH
export CPATH=/opt/cuda/include
pipenv install http://download.pytorch.org/whl/cu92/torch-0.4.1-cp36-cp36m-linux_x86_64.whl
pipenv install torchvision
pipenv install cffi
pipenv install torch-scatter
pipenv install torch-sparse
# pipenv install torch-cluster
# pipenv install torch-spline-conv
# pipenv install torch-geometric
pipenv run python -c "from torch_sparse import spmm"

More info:

$ pipenv run python -c "import torch; print(torch.__version__)"
0.4.1

I'm not sure if this is a problem on my side or not. Would appreciate your help.

Should coalesce sort the value tensor?

I am wondering why the value tensor is not sorted alongside the index tensor when using coalesce. Is this intentional?

My inputs to coalesce are edge_index and edgebatch, which for example are:

edge_index : tensor([[34, 68, 35, 69], [35, 69, 34, 68]], device='cuda:0')

edgebatch : tensor([34, 67, 67, 34], device='cuda:0')

Then I call coalesce in the following way:
edge_index, edgebatch = coalesce(edge_index, edgebatch, num_nodes, num_nodes)

The output for edge_index is as I expected:
edge_index : tensor([[34, 35, 68, 69], [35, 34, 69, 68]], device='cuda:0')

but the output for edgebatch is not ordered alongside edge_index:
edgebatch : tensor([34, 67, 67, 34], device='cuda:0')

Is this intentional or not? Also, is it possible to have an option to also sort the value tensor?

New Feature Request: SparseTensor `+(*,-)` SparseTensor

Hi, Matthias Fey! Do you have plans to support basic element-wise arithmetic such as +, - and * for two SparseTensors with the same shape? Besides, since some element-wise operations, e.g. add_nnz_ and mul_nnz, are already implemented by modifying SparseTensor.storage._value, I think other operations such as log, cos and abs could be implemented in a similar way with little difficulty. Are there any plans for them?
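A rough workaround sketch in the meantime (not an official API), assuming set_value and storage.value() behave as in recent torch_sparse releases: element-wise operations that only touch the non-zero entries can be emulated by replacing the value tensor.

import torch
from torch_sparse import SparseTensor

A = SparseTensor.from_dense(torch.tensor([[0.0, 2.0], [3.0, 0.0]]))

# abs()/log() applied to the non-zero entries only:
A_abs = A.set_value(A.storage.value().abs(), layout='coo')
A_log = A.set_value(A.storage.value().log(), layout='coo')
print(A_log.to_dense())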

Different CUDA versions than PyTorch

I'm trying to run PyTorch Geometric in google colab, and I installed all needed libraries using:

!pip install --upgrade torch-scatter
!pip install --upgrade torch-sparse
!pip install --upgrade torch-cluster
!pip install --upgrade torch-spline-conv 
!pip install torch-geometric

but I got this error:

RuntimeError: Detected that PyTorch and torch_sparse were compiled with different CUDA versions. PyTorch has CUDA version 10.1 and torch_sparse has CUDA version 10.0. Please reinstall the torch_sparse that matches your PyTorch install.

I tried to search and I found this suggested solution, but it didn't work for me:

!pip install torch-scatter==latest+cu101 torch-sparse==latest+cu101 -f https://s3.eu-central-1.amazonaws.com/pytorch-geometric.com/whl/torch-1.4.0.html

Any help, please!

RuntimeError: CUDA error: an illegal memory access was encountered

  File "examples/sem_seg_sparse/train.py", line 142, in <module>
    main()
  File "examples/sem_seg_sparse/train.py", line 61, in main
    train(model, train_loader, optimizer, scheduler, criterion, opt)
  File "examples/sem_seg_sparse/train.py", line 79, in train
    out = model(data)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/drive/My Drive/deep_gcns_torch/examples/sem_seg_sparse/architecture.py", line 69, in forward
    feats.append(self.gunet(feats[-1],edge_index=edge_index ,batch=batch))
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch_geometric/nn/models/graph_unet.py", line 83, in forward
    x.size(0))
  File "/usr/local/lib/python3.6/dist-packages/torch_geometric/nn/models/graph_unet.py", line 120, in augment_adj
    num_nodes)
  File "/usr/local/lib/python3.6/dist-packages/torch_sparse/spspmm.py", line 30, in spspmm
    C = matmul(A, B)
  File "/usr/local/lib/python3.6/dist-packages/torch_sparse/matmul.py", line 107, in matmul
    return spspmm(src, other, reduce)
  File "/usr/local/lib/python3.6/dist-packages/torch_sparse/matmul.py", line 95, in spspmm
    return spspmm_sum(src, other)
  File "/usr/local/lib/python3.6/dist-packages/torch_sparse/matmul.py", line 83, in spspmm_sum
    rowptrA, colA, valueA, rowptrB, colB, valueB, K)
RuntimeError: CUDA error: an illegal memory access was encountered (launch_kernel at /pytorch/aten/src/ATen/native/cuda/Loops.cuh:103)

Hi, I'm integrating the Graph U-Net and other models on Google Colab, but there are some bugs. Could you help me? Thanks.

install without GPU

Is it possible to install for use only with CPU?
I get the following error on installation:

$ pip install torch-sparse
Collecting torch-sparse
  Using cached https://files.pythonhosted.org/packages/b0/0a/2ff678e0d04e524dd2cf990a6202ced8c0ffe3fe6b08e02f25cc9fd27da0/torch_sparse-0.4.0.tar.gz
Requirement already satisfied: scipy in /home/pete/miniconda3/envs/pinet2/lib/python3.6/site-packages (from torch-sparse) (1.3.0)
Requirement already satisfied: numpy>=1.13.3 in /home/pete/miniconda3/envs/pinet2/lib/python3.6/site-packages (from scipy->torch-sparse) (1.16.4)
Building wheels for collected packages: torch-sparse
  Building wheel for torch-sparse (setup.py) ... error
  ERROR: Complete output from command /home/pete/miniconda3/envs/pinet2/bin/python -u -c 'import setuptools, tokenize;__file__='"'"'/tmp/pip-install-mizghsjn/torch-sparse/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-v_tqug6l --python-tag cp36:
  ERROR: No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
  running bdist_wheel
  running build
  running build_py
  creating build
  creating build/lib.linux-x86_64-3.6
  creating build/lib.linux-x86_64-3.6/torch_sparse
  copying torch_sparse/coalesce.py -> build/lib.linux-x86_64-3.6/torch_sparse
  copying torch_sparse/eye.py -> build/lib.linux-x86_64-3.6/torch_sparse
  copying torch_sparse/spmm.py -> build/lib.linux-x86_64-3.6/torch_sparse
  copying torch_sparse/convert.py -> build/lib.linux-x86_64-3.6/torch_sparse
  copying torch_sparse/__init__.py -> build/lib.linux-x86_64-3.6/torch_sparse
  copying torch_sparse/transpose.py -> build/lib.linux-x86_64-3.6/torch_sparse
  copying torch_sparse/spspmm.py -> build/lib.linux-x86_64-3.6/torch_sparse
  creating build/lib.linux-x86_64-3.6/test
  copying test/utils.py -> build/lib.linux-x86_64-3.6/test
  copying test/test_eye.py -> build/lib.linux-x86_64-3.6/test
  copying test/test_spspmm_spmm.py -> build/lib.linux-x86_64-3.6/test
  copying test/test_spspmm.py -> build/lib.linux-x86_64-3.6/test
  copying test/test_convert.py -> build/lib.linux-x86_64-3.6/test
  copying test/test_coalesce.py -> build/lib.linux-x86_64-3.6/test
  copying test/test_transpose.py -> build/lib.linux-x86_64-3.6/test
  copying test/__init__.py -> build/lib.linux-x86_64-3.6/test
  copying test/test_spmm.py -> build/lib.linux-x86_64-3.6/test
  creating build/lib.linux-x86_64-3.6/torch_sparse/utils
  copying torch_sparse/utils/unique.py -> build/lib.linux-x86_64-3.6/torch_sparse/utils
  copying torch_sparse/utils/__init__.py -> build/lib.linux-x86_64-3.6/torch_sparse/utils
  running build_ext
  building 'torch_sparse.spspmm_cpu' extension
  creating build/temp.linux-x86_64-3.6
  creating build/temp.linux-x86_64-3.6/cpu
  gcc -pthread -B /home/pete/miniconda3/envs/pinet2/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/pete/miniconda3/envs/pinet2/lib/python3.6/site-packages/torch/include -I/home/pete/miniconda3/envs/pinet2/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/pete/miniconda3/envs/pinet2/lib/python3.6/site-packages/torch/include/TH -I/home/pete/miniconda3/envs/pinet2/lib/python3.6/site-packages/torch/include/THC -I/home/pete/miniconda3/envs/pinet2/include/python3.6m -c cpu/spspmm.cpp -o build/temp.linux-x86_64-3.6/cpu/spspmm.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=spspmm_cpu -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
  cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
  g++ -pthread -shared -B /home/pete/miniconda3/envs/pinet2/compiler_compat -L/home/pete/miniconda3/envs/pinet2/lib -Wl,-rpath=/home/pete/miniconda3/envs/pinet2/lib -Wl,--no-as-needed -Wl,--sysroot=/ build/temp.linux-x86_64-3.6/cpu/spspmm.o -o build/lib.linux-x86_64-3.6/torch_sparse/spspmm_cpu.cpython-36m-x86_64-linux-gnu.so
  building 'torch_sparse.spspmm_cuda' extension
  creating build/temp.linux-x86_64-3.6/cuda
  gcc -pthread -B /home/pete/miniconda3/envs/pinet2/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/pete/miniconda3/envs/pinet2/lib/python3.6/site-packages/torch/include -I/home/pete/miniconda3/envs/pinet2/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/pete/miniconda3/envs/pinet2/lib/python3.6/site-packages/torch/include/TH -I/home/pete/miniconda3/envs/pinet2/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/pete/miniconda3/envs/pinet2/include/python3.6m -c cuda/spspmm.cpp -o build/temp.linux-x86_64-3.6/cuda/spspmm.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=spspmm_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
  cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
  /usr/local/cuda/bin/nvcc -I/home/pete/miniconda3/envs/pinet2/lib/python3.6/site-packages/torch/include -I/home/pete/miniconda3/envs/pinet2/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/pete/miniconda3/envs/pinet2/lib/python3.6/site-packages/torch/include/TH -I/home/pete/miniconda3/envs/pinet2/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/pete/miniconda3/envs/pinet2/include/python3.6m -c cuda/spspmm_kernel.cu -o build/temp.linux-x86_64-3.6/cuda/spspmm_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=spspmm_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
  unable to execute '/usr/local/cuda/bin/nvcc': No such file or directory
  error: command '/usr/local/cuda/bin/nvcc' failed with exit status 1
  ----------------------------------------
  ERROR: Failed building wheel for torch-sparse
  Running setup.py clean for torch-sparse
Failed to build torch-sparse

I have already installed pytorch:

$ python -c "import torch; print(torch.__version__)" 
1.1.0

I don't have CUDA since I don't have a GPU, but the readme states that "All included operations work on varying data types and are implemented both for CPU and GPU." So how can I use this only with CPU?

I also get the same error for torch-scatter.

Any help appreciated.

Fatal error while pip install

When I use pip to install this package, I got the following error:

cpu/scatter.cpp:1:29: fatal error: torch/extension.h: No such file or directory compilation terminated. error: command 'gcc' failed with exit status 1

Anyone know how to solve it?

Install error

Same issue is found in the torch-scatter package:
Ubuntu 18.04
Cuda 9.0.176
Torch 1.1 (matching Cuda)

/usr/include/c++/6/type_traits:1558:8: note: provided for 'template<class _From, class _To> struct std::is_convertible'
 struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function 'static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, double, long int>&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, double, long int}]' not a return-statement
 }
 ^
error: command '/usr/local/cuda/bin/nvcc' failed with exit status 1

Compatibility between torch.Tensor and SparseTensor

Hey,
what are your plans regarding SparseTensor? There seem to be some incompatibilities between the sparse and dense tensors that may be avoidable. For example, it would be nice if

  • SparseTensor.size() would work without arguments and return a torch.Size
  • SparseTensor.{dtype,device} were properties instead of methods (then you could use a sparse tensor directly as an options object)
  • sparse tensors could be added to other sparse tensors

I wanted to create a PR but noticed that at least the first two points seem to stem from the fact that the whole of SparseTensor is a torch.jit.script. Why is that? Does it give a significant performance boost? Do you export SparseTensor into no-python environments? Is there a way to make sparse and dense tensors compatible while also keeping the benefits of torch.jit.script?

I have recently switched over from the built-in sparse tensors because your spmm is very fast and I would like to contribute to make the points in my list possible. Are these things compatible with your goals and should I give it a go? If so, which approach would you prefer/how would you proceed?

Best,
Marten
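A small sketch of the current method-style accessors mentioned in the issue above, assuming a recent torch_sparse release (on a dense torch.Tensor, dtype and device are properties instead):

import torch
from torch_sparse import SparseTensor

A = SparseTensor.from_dense(torch.eye(3))
print(A.sizes())              # full shape as a Python list, e.g. [3, 3]
print(A.size(0))              # size() currently expects an explicit dimension
print(A.dtype(), A.device())  # methods today, properties on torch.Tensor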

Indexing with arbitrary iterables

Support for indexing with arbitrary lists of indices would be very nice, like it is supported by PyTorch, Numpy, scipy.sparse, etc.

Also, the line

index = list(index) if isinstance(index, tuple) else [index]

forces the usage of a tuple for indexing. It would be nice to make this more general and support e.g. a list.
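As a small workaround sketch until list indexing is supported, one can convert the list to a LongTensor and use index_select on the desired dimension (assuming index_select behaves as in recent torch_sparse releases):

import torch
from torch_sparse import SparseTensor

A = SparseTensor.from_dense(torch.eye(4))
rows = [0, 2, 3]                               # arbitrary iterable of row indices
sub = A.index_select(0, torch.tensor(rows))    # select rows 0, 2 and 3
print(sub.to_dense())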

Does spspmm operation support autograd?

Hi, you say autograd is supported for value tensors, but it seems it doesn't work in spspmm.

Like this:

import torch
import torch_sparse

indexA = torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]])
valueA = torch.tensor([1, 2.0, 3, 4, 5], requires_grad=True)
indexB = torch.tensor([[0, 2], [1, 0]])
valueB = torch.tensor([2, 4.0], requires_grad=True)
indexC, valueC = torch_sparse.spspmm(indexA, valueA, indexB, valueB, 3, 3, 2)

print(valueC.requires_grad)
print(valueC.grad_fn)

And the answer is:

False
None

In my case, I want to parameterize the sparse adjacency matrix and the feature matrix in a GCN, so both inputs need to be differentiable. I wonder if this is a bug or just the way it is.

Regards.

Derivative Not Implemented Error

I am trying to use spmm with a sparse tensor that's coming from the result of spspmm. This produces the following error.

  File "/anaconda3/envs/hep/lib/python3.6/site-packages/torch_sparse/spmm.py", line 21, in spmm
    out = scatter_add(out, row, dim=0, dim_size=m)
  File "/anaconda3/envs/hep/lib/python3.6/site-packages/torch_scatter/add.py", line 73, in scatter_add
    return out.scatter_add_(dim, index, src)
RuntimeError: the derivative for 'index' is not implemented

I couldn't figure this out, but I see requires_grad=True in the indices tensor of the spspmm output. Could this be the reason, and is there a way to turn it off?
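A hedged workaround sketch: scatter_add_ has no derivative with respect to index, so detaching the (discrete) index tensor returned by spspmm before feeding it to spmm avoids the error, while gradients can still flow through the value tensor where supported.

import torch
from torch_sparse import spspmm, spmm

indexA = torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]])
valueA = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0], requires_grad=True)
indexB = torch.tensor([[0, 2], [1, 0]])
valueB = torch.tensor([2.0, 4.0], requires_grad=True)

indexC, valueC = spspmm(indexA, valueA, indexB, valueB, 3, 3, 2)

dense = torch.rand(2, 4)
# Detach the index so scatter_add_ inside spmm is not asked to differentiate it.
out = spmm(indexC.detach(), valueC, 3, 2, dense)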

Wrong density of SparseTensor obtained through slice operation

>>> import torch
>>> from torch_sparse import SparseTensor
>>> A = torch.tensor([[0., 0., 1., 0.],
...                   [0., 0., 0., 0.],
...                   [0., 0., 0., 3.],
...                   [0., 0., 2., 0.]], dtype=torch.float64)
>>> Ats = SparseTensor.from_dense(A)
>>> print(Ats[:, 1:2])
SparseTensor(row=tensor([2]),
             col=tensor([0]),
             val=tensor([0.], dtype=torch.float64),
             size=(4, 1), nnz=1, density=25.00%)
>>> print(Ats[:, 1:2].to_dense())
tensor([[0.],
        [0.],
        [0.],
        [0.]], dtype=torch.float64)

Will CUDA 10.2 binaries come out soon?

To avoid this error:

    f'Detected that PyTorch and torch_sparse were compiled with '
RuntimeError: Detected that PyTorch and torch_sparse were compiled with different CUDA versions. PyTorch has CUDA version 10.2 and torch_sparse has CUDA version 10.1. Please reinstall the torch_sparse that matches your PyTorch install.

batch matrix multiplication

Any idea how to implement batch matrix multiplication (batch_sparse_matmul)? Thanks!

a = torch.rand(10, 4, 5).to_sparse()
b = torch.rand(10, 5, 6)
assert batch_sparse_matmul(a, b).size() == (10, 4, 6)
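A simple fallback sketch (a per-slice loop, not a fused batched kernel), assuming integer indexing of the sparse batch dimension is available in your PyTorch version:

import torch

def batch_sparse_matmul(a, b):
    # a: sparse COO tensor of shape (B, M, K), b: dense tensor of shape (B, K, N)
    return torch.stack([torch.sparse.mm(a[i].coalesce(), b[i])
                        for i in range(b.size(0))])

a = torch.rand(10, 4, 5).to_sparse()
b = torch.rand(10, 5, 6)
assert batch_sparse_matmul(a, b).size() == (10, 4, 6)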

Can't install torch-sparse

Hi. Though I have tried many times, I still can't install torch-sparse.

It says that:
Running setup.py install for torch-sparse ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\56784\AppData\Local\conda\conda\envs\my_root\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\56784\AppData\Local\Temp\pip-install-pp8x8xof\torch-sparse\setup.py'"'"'; file='"'"'C:\Users\56784\AppData\Local\Temp\pip-install-pp8x8xof\torch-sparse\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record 'C:\Users\56784\AppData\Local\Temp\pip-record-y00on3tr\install-record.txt' --single-version-externally-managed --compile
cwd: C:\Users\56784\AppData\Local\Temp\pip-install-pp8x8xof\torch-sparse
Complete output (53 lines):
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.6
creating build\lib.win-amd64-3.6\test
copying test\test_coalesce.py -> build\lib.win-amd64-3.6\test
copying test\test_convert.py -> build\lib.win-amd64-3.6\test
copying test\test_eye.py -> build\lib.win-amd64-3.6\test
copying test\test_spmm.py -> build\lib.win-amd64-3.6\test
copying test\test_spspmm.py -> build\lib.win-amd64-3.6\test
copying test\test_spspmm_spmm.py -> build\lib.win-amd64-3.6\test
copying test\test_transpose.py -> build\lib.win-amd64-3.6\test
copying test\utils.py -> build\lib.win-amd64-3.6\test
copying test\__init__.py -> build\lib.win-amd64-3.6\test
creating build\lib.win-amd64-3.6\torch_sparse
copying torch_sparse\coalesce.py -> build\lib.win-amd64-3.6\torch_sparse
copying torch_sparse\convert.py -> build\lib.win-amd64-3.6\torch_sparse
copying torch_sparse\eye.py -> build\lib.win-amd64-3.6\torch_sparse
copying torch_sparse\spmm.py -> build\lib.win-amd64-3.6\torch_sparse
copying torch_sparse\spspmm.py -> build\lib.win-amd64-3.6\torch_sparse
copying torch_sparse\transpose.py -> build\lib.win-amd64-3.6\torch_sparse
copying torch_sparse\__init__.py -> build\lib.win-amd64-3.6\torch_sparse
creating build\lib.win-amd64-3.6\torch_sparse\utils
copying torch_sparse\utils\unique.py -> build\lib.win-amd64-3.6\torch_sparse\utils
copying torch_sparse\utils\__init__.py -> build\lib.win-amd64-3.6\torch_sparse\utils
running build_ext
C:\Users\56784\AppData\Local\conda\conda\envs\my_root\lib\site-packages\torch\utils\cpp_extension.py:88: UserWarning: Error checking compiler version: [WinError 2] The system cannot find the file specified.
warnings.warn('Error checking compiler version: {}'.format(error))
C:\Users\56784\AppData\Local\conda\conda\envs\my_root\lib\site-packages\torch\utils\cpp_extension.py:114: UserWarning:

                               !! WARNING !!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Your compiler (cl) may be ABI-incompatible with PyTorch!
Please use a compiler that is ABI-compatible with GCC 4.9 and above.
See https://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html.

See https://gist.github.com/goldsborough/d466f43e8ffc948ff92de7486c5216d6
for instructions on how to install GCC 4.9 or higher.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

                              !! WARNING !!

  warnings.warn(ABI_INCOMPATIBILITY_WARNING.format(compiler))
building 'torch_sparse.spspmm_cpu' extension
creating build\temp.win-amd64-3.6
creating build\temp.win-amd64-3.6\Release
creating build\temp.win-amd64-3.6\Release\cpu
D:\Program Files\visual\VC\Tools\MSVC\14.22.27905\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MT -IC:\Users\56784\AppData\Local\conda\conda\envs\my_root\lib\site-packages\torch\lib\include -IC:\Users\56784\AppData\Local\conda\conda\envs\my_root\lib\site-packages\torch\lib\include\TH -IC:\Users\56784\AppData\Local\conda\conda\envs\my_root\lib\site-packages\torch\lib\include\THC -IC:\Users\56784\AppData\Local\conda\conda\envs\my_root\include -IC:\Users\56784\AppData\Local\conda\conda\envs\my_root\include "-ID:\Program Files\visual\VC\Tools\MSVC\14.22.27905\ATLMFC\include" "-ID:\Program Files\visual\VC\Tools\MSVC\14.22.27905\include" "-ID:\Windows Kits\10\include\10.0.18362.0\ucrt" "-ID:\Windows Kits\10\include\10.0.18362.0\shared" "-ID:\Windows Kits\10\include\10.0.18362.0\um" "-ID:\Windows Kits\10\include\10.0.18362.0\winrt" "-ID:\Windows Kits\10\include\10.0.18362.0\cppwinrt" /EHsc /Tpcpu/spspmm.cpp /Fobuild\temp.win-amd64-3.6\Release\cpu/spspmm.obj -DTORCH_EXTENSION_NAME=spspmm_cpu
spspmm.cpp
cpu/spspmm.cpp(1): fatal error C1083: Cannot open include file: 'torch/extension.h': No such file or directory
error: command 'D:\Program Files\visual\VC\Tools\MSVC\14.22.27905\bin\HostX86\x64\cl.exe' failed with exit status 2
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\Users\56784\AppData\Local\conda\conda\envs\my_root\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\56784\AppData\Local\Temp\pip-install-pp8x8xof\torch-sparse\setup.py'"'"'; file='"'"'C:\Users\56784\AppData\Local\Temp\pip-install-pp8x8xof\torch-sparse\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record 'C:\Users\56784\AppData\Local\Temp\pip-record-y00on3tr\install-record.txt' --single-version-externally-managed --compile Check the logs for full command output.

Giving feedback

Hello,

PyTorch is looking into extending its support for sparse tensors. Perhaps you are interested into giving some feedback:
Sparse tensor use cases
(Close this issue as you want).

Segmentation fault in GPU sparse matrix by sparse matrix product

Hello!
Thanks a lot for this PyTorch extension!
But I ran into the following issue. I tried to run the sparse-sparse matrix product example from README.md, but on the GPU, and got a "Segmentation fault" error. Below is the code of my short script to reproduce the error.

import torch
from torch_sparse import spspmm
device = torch.device("cuda")

indexA = torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]], device=device)
valueA = torch.tensor([1, 2, 3, 4, 5], dtype=torch.float, device=device)

indexB = torch.tensor([[0, 2], [1, 0]], device=device)
valueB = torch.tensor([2, 4], dtype=torch.float, device=device)

indexC, valueC = spspmm(indexA, valueA, indexB, valueB, 3, 3, 2)

I use PyTorch 1.0 and CUDA 8.0, and installed both extensions (pytorch_scatter and pytorch_sparse) from the source files in their repositories.

use sparse tensor to get each node's neighbors

Hi @rusty1s, about using sparse tensor multiplication to implement SAGE convolution via adj.matmul(x, reduce='mean'): is it possible for me to get each node's neighbors' feature vectors instead of the reduced one? Just like the message(self, x_j) function, but based on a sparse tensor rather than an edge index.
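A rough sketch of what this could look like with the current API, assuming adj is a torch_sparse.SparseTensor holding the adjacency: the COO column indices select the features that matmul would aggregate into each row, so the per-edge (non-reduced) neighbor features are simply x[col].

import torch
from torch_sparse import SparseTensor
from torch_scatter import scatter

x = torch.rand(4, 8)                                     # node features
edge_index = torch.tensor([[0, 1, 2, 3], [1, 0, 3, 2]])
adj = SparseTensor(row=edge_index[0], col=edge_index[1], sparse_sizes=(4, 4))

row, col, _ = adj.coo()
neighbor_feats = x[col]   # per-edge neighbor features, analogous to x_j in message()

# Manual mean aggregation, roughly equivalent to adj.matmul(x, reduce='mean'):
out = scatter(neighbor_feats, row, dim=0, reduce='mean', dim_size=adj.size(0))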

type problems reproducing spspmm example from readme

import torch
from torch_sparse import spspmm

indexA = torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]])
valueA = torch.tensor([1, 2, 3, 4, 5], dtype=torch.float)

indexB = torch.tensor([[0, 2], [1, 0]])
valueB = torch.tensor([2, 4], dtype=torch.float)
indexC, valueC = spspmm(indexA, valueA, indexB, valueB, 3, 3, 2)
Traceback (most recent call last):
File "", line 1, in
File "C:\ProgramData\Anaconda3\lib\site-packages\torch_sparse\spspmm.py", line 26, in spspmm
return SpSpMM.apply(indexA, valueA, indexB, valueB, m, k, n)
File "C:\ProgramData\Anaconda3\lib\site-packages\torch_sparse\spspmm.py", line 32, in forward
indexC, valueC = mm(indexA, valueA, indexB, valueB, m, k, n)
File "C:\ProgramData\Anaconda3\lib\site-packages\torch_sparse\spspmm.py", line 68, in mm
indexC, valueC = from_scipy(A.tocsr().dot(B.tocsr()).tocoo())
File "C:\ProgramData\Anaconda3\lib\site-packages\torch_sparse\spspmm.py", line 79, in from_scipy
row, col, value = from_numpy(A.row), from_numpy(A.col), from_numpy(A.data)
TypeError: can't convert np.ndarray of type numpy.int32. The only supported types are: double, float, float16, int64, int32, and uint8.

(Windows 10 CPU-only PyTorch installation.)

The other examples from README work ok, except that "coalesce" transposes value:

>>> from torch_sparse import coalesce

>>> index = torch.tensor([[1, 0, 1, 0, 2, 1],
...                       [0, 1, 1, 1, 0, 0]])
>>> value = torch.tensor([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7]])
>>> index, value = coalesce(index, value, m=3, n=2)
>>> index
tensor([[0, 1, 1, 2],
        [1, 0, 1, 0]])
>>> value
tensor([[6, 8],
        [7, 9],
        [3, 4],
        [5, 6]])

import unique_cuda error while 'from torch_sparse import coalesce'

Hi,

When I install this package using pip and want to import it in Python, I get the following error:

Do you have any idea about how to solve it?

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/yiding/anaconda3/envs/pytorch1_1/lib/python3.6/site-packages/torch_sparse/__init__.py", line 1, in <module>
    from .coalesce import coalesce
  File "/home/yiding/anaconda3/envs/pytorch1_1/lib/python3.6/site-packages/torch_sparse/coalesce.py", line 4, in <module>
    from .utils.unique import unique
  File "/home/yiding/anaconda3/envs/pytorch1_1/lib/python3.6/site-packages/torch_sparse/utils/unique.py", line 5, in <module>
    import unique_cuda
ImportError: /home/yiding/anaconda3/envs/pytorch1_1/lib/python3.6/site-packages/unique_cuda.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN2at6detail20DynamicCUDAInterface10set_deviceE

Is there a roadmap of this library?

Hi, rusty1s!

Thanks for your excellent work! I've noticed that there are some updates to this lib attempting to build more built-in interfaces like those provided in PyTorch. May I ask about your future plans for improving this lib? In particular, I'm curious whether the features below will be supported in the future.

  1. High-dimensional sparse tensor and the corresponding operations, e.g., batched sparse tensor matmul.
  2. Broadcasting semantics for sparse tensors.
  3. A concise sparse tensor class just like torch.sparse_coo_tensor.
  4. (advanced) Index and slice operations like those supported by the scipy.sparse module, though scipy.sparse.coo_matrix does not support indexing and slicing.

The features mentioned above may not be clear, so just give me a rough reply. Last but not least, will the lib be merged into PyTorch? I've subscribed to two issues about sparse tensors in PyTorch, but it seems there is almost no progress or new discussion.

Thx again, have a good day!

ModuleNotFoundError: No module named 'torch_sparse.unique_cuda'

Hello everyone,

I would appreciate your help with the following issue I am having. Unfortunately, previously raised related threads couldn't help me.
Having installed successfully PyTorch Geometric and all dependencies, I tried to run the introduction example in a terminal. After typing "from torch_geometric.data import Data" I am getting
ModuleNotFoundError: No module named 'torch_sparse.unique_cuda'.
Any recommendations on what to do next?

Thank you in advance for your help!


Max operation not working in coalesce with negative values

Why does the max operation between negative values give 0 instead of the max of the negative numbers?

Here is the code snippet:

import torch
from torch_sparse import coalesce

index = torch.tensor([[1, 0, 1, 0, 2, 1],
                      [0, 1, 1, 1, 0, 0]])
value = torch.Tensor([[-11], [-2], [-3], [-4], [-5], [-6]])
value = value.type(torch.float)

index, value = coalesce(index, value, m=3, n=2, op="max")
print(index)
print(value)

Output:

tensor([[0, 1, 1, 2],
        [1, 0, 1, 0]])
tensor([[0.],
        [0.],
        [0.],
        [0.]])
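A hedged workaround sketch: older scatter-based "max" kernels initialize the output with zeros, so strictly negative values get clamped at zero. Shifting the values to be non-negative before coalescing and shifting back afterwards sidesteps this.

import torch
from torch_sparse import coalesce

index = torch.tensor([[1, 0, 1, 0, 2, 1],
                      [0, 1, 1, 1, 0, 0]])
value = torch.tensor([[-11.], [-2.], [-3.], [-4.], [-5.], [-6.]])

shift = value.min()
index, value = coalesce(index, value - shift, m=3, n=2, op="max")
value = value + shift
print(value)   # max of the duplicates, with negative values preserved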

About building C++, like that in `pytorch_cluster`

hello,
I managed to build pytorch_sparse's C++ API, like the one in pytorch_cluster.
In this issue, we all know that it is not necessary to have libtorchgeometric.
And I know that a C++ API has been provided for pytorch_cluster, pytorch_scatter and pytorch_spline_conv, but I find that there is no C++ API in pytorch_sparse, and I do not know why.

Based on the CMakeLists.txt and cmake/TorchClusterConfig.cmake.in in pytorch_cluster, I wrote CMakeLists.txt and cmake/TorchSparseConfig.cmake.in in pytorch_sparse.

The files are listed here.

CMakeLists.txt

cmake_minimum_required(VERSION 3.0)
project(torchsparse)
set(CMAKE_CXX_STANDARD 14)
set(TORCHSPARSE_VERSION 1.5.4)

option(WITH_CUDA "Enable CUDA support" OFF)

if(WITH_CUDA)
  enable_language(CUDA)
  add_definitions(-D__CUDA_NO_HALF_OPERATORS__)
endif()

find_package(Python3 COMPONENTS Development)
find_package(Torch REQUIRED)

file(GLOB OPERATOR_SOURCES csrc/cpu/*.h csrc/cpu/*.cpp csrc/*.cpp)
if(WITH_CUDA)
  file(GLOB OPERATOR_SOURCES ${OPERATOR_SOURCES} csrc/cuda/*.h csrc/cuda/*.cu)
endif()

add_library(${PROJECT_NAME} SHARED ${OPERATOR_SOURCES})
target_link_libraries(${PROJECT_NAME} PRIVATE ${TORCH_LIBRARIES} Python3::Python)
set_target_properties(${PROJECT_NAME} PROPERTIES EXPORT_NAME TorchSparse)

target_include_directories(${PROJECT_NAME} INTERFACE
  $<BUILD_INTERFACE:${HEADERS}>
  $<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>)

include(GNUInstallDirs)
include(CMakePackageConfigHelpers)

set(TORCHSPARSE_CMAKECONFIG_INSTALL_DIR "share/cmake/TorchSparse" CACHE STRING "install path for TorchSparseConfig.cmake")

configure_package_config_file(cmake/TorchSparseConfig.cmake.in
  "${CMAKE_CURRENT_BINARY_DIR}/TorchSparseConfig.cmake"
  INSTALL_DESTINATION ${TORCHSPARSE_CMAKECONFIG_INSTALL_DIR})

write_basic_package_version_file(${CMAKE_CURRENT_BINARY_DIR}/TorchSparseConfigVersion.cmake
  VERSION ${TORCHSPARSE_VERSION}
  COMPATIBILITY AnyNewerVersion)

install(FILES ${CMAKE_CURRENT_BINARY_DIR}/TorchSparseConfig.cmake
  ${CMAKE_CURRENT_BINARY_DIR}/TorchSparseConfigVersion.cmake
  DESTINATION ${TORCHSPARSE_CMAKECONFIG_INSTALL_DIR})

install(TARGETS ${PROJECT_NAME}
  EXPORT TorchSparseTargets
  LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR}
  )

install(EXPORT TorchSparseTargets
  NAMESPACE TorchSparse::
  DESTINATION ${TORCHSPARSE_CMAKECONFIG_INSTALL_DIR})

install(FILES ${HEADERS} DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/${PROJECT_NAME})
install(FILES
  csrc/cpu/convert_cpu.h
  csrc/cpu/diag_cpu.h
  # csrc/cpu/metis_cpu.h # I do not have metis library now, after installing metis, everything can be fine. 
  csrc/cpu/padding_cpu.h
  csrc/cpu/reducer.h
  csrc/cpu/rw_cpu.h
  csrc/cpu/saint_cpu.h
  csrc/cpu/sample_cpu.h
  csrc/cpu/spmm_cpu.h
  csrc/cpu/spspmm_cpu.h
  csrc/cpu/utils.h
  DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/${PROJECT_NAME}/cpu)
if(WITH_CUDA)
  install(FILES
    csrc/cuda/atomics.cuh
    csrc/cuda/convert_cuda.h
    csrc/cuda/diag_cuda.h
    csrc/cuda/padding_cuda.h
    csrc/cuda/reducer.cuh
    csrc/cuda/rw_cuda.h
    csrc/cuda/spmm_cuda.h
    csrc/cuda/spspmm_cuda.h
    csrc/cuda/utils.cuh
    DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/${PROJECT_NAME}/cuda)
endif() 

cmake/TorchSparseConfig.cmake.in

# TorchSparseConfig.cmake
# --------------------
#
# Exported targets:: Cluster
#

@PACKAGE_INIT@

set(PN TorchSparse)
set(${PN}_INCLUDE_DIR "${PACKAGE_PREFIX_DIR}/@CMAKE_INSTALL_INCLUDEDIR@")
set(${PN}_LIBRARY "")
set(${PN}_DEFINITIONS USING_${PN})

check_required_components(${PN})


if(NOT (CMAKE_VERSION VERSION_LESS 3.0))
#-----------------------------------------------------------------------------
# Don't include targets if this file is being picked up by another
# project which has already built this as a subproject
#-----------------------------------------------------------------------------
if(NOT TARGET ${PN}::TorchSparse)
include("${CMAKE_CURRENT_LIST_DIR}/${PN}Targets.cmake")

if(NOT TARGET torch_library)
find_package(Torch REQUIRED)
endif()
if(NOT TARGET Python3::Python)
find_package(Python3 COMPONENTS Development)
endif()
target_link_libraries(TorchSparse::TorchSparse INTERFACE ${TORCH_LIBRARIES} Python3::Python)

if(@WITH_CUDA@)
  target_compile_definitions(TorchSparse::TorchSparse INTERFACE WITH_CUDA)
endif()

endif()
endif() 

The building procedure is the same as in the other libraries.

And I wrote an example about spmm; I do not know much about this aspect, but the results from Python and C++ are the same.

this example's code is also listed below:
main.cpp

#include <torch/torch.h>
#include <iostream>
#include "Python.h"
#include <torchsparse/cpu/spmm_cpu.h>
#include <vector>
#include <tuple>
using namespace std;
int main () {
    vector<long> index_ = {0,0,1,2,2,0,2,1,0,1};
    vector<float> value_ = {1,2,4,1,3.};
    vector<float> matrix_ = {1.,4,2,5,3,6};
    torch::Tensor index = torch::tensor(index_).reshape({2, 5});
    torch::Tensor value = torch::tensor(value_);
    torch::Tensor matrix = torch::tensor(matrix_).reshape({3, 2});
    // cout << index[0] << endl;
    std::tuple<torch::Tensor, torch::optional<torch::Tensor>> out = spmm_cpu(index[0], index[1], value, matrix, "sum");
    torch::Tensor out0;
    torch::optional<torch::Tensor> out1;
    tie(out0, out1) = out;
    cout << out0 << endl;
//    cout << out1[0] << endl;
    // cout << out[0] << endl;
    // torch::Tensor out = spmm_value_bw_cpu(index, value, 3, 3, matrix);
    return 0;
}

CMakeLists.txt

cmake_minimum_required(VERSION 3.15)
project(warma)

set(CMAKE_CXX_STANDARD 14)

set(CMAKE_PREFIX_PATH "/home/wmf997/build_software/pytorch/torch")
include_directories("/usr/include/python3.8/")  # for Python.h
include_directories("/home/wmf997/build_software/pytorch_sparse/build_cpp/include")
find_package(Torch REQUIRED)

add_executable(warma main.cpp)

target_link_libraries(warma "${TORCH_LIBRARIES}")  # Do we need to write the code like this?
target_link_libraries(warma "/home/wmf997/build_software/pytorch_sparse/build_cpp/lib/libtorchsparse.so")
# set_property(TARGET dcgan PROPERTY CXX_STANDARD 14)

main.py

import torch
import torch_sparse
index = torch.tensor([0,0,1,2,2,0,2,1,0,1], dtype=torch.long)
index = index.reshape(2, 5)                                                                                                                                                                                              
value = torch.tensor([1,2,4,1,3.])                                                                                                                                                                                       
matrix = torch.tensor([1.,4,2,5,3,6])
matrix = matrix.reshape(3, 2)
a = torch_sparse.tensor.SparseTensor(row=index[0,:], rowptr=index[0,:], col=index[1, :], value=value)
a1 = torch_sparse.matmul(a, matrix)

Yours sincerely,
@WMF1997

RuntimeError: cuda runtime error when using spspmm on cuda

Hi, I'm trying to use spspmm on CUDA to calculate the power of a 1086 x 1086 matrix with 66442 non-zero elements, and I encountered a "cuda runtime error". The complete error message is as follows:

THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1556653114079/work/aten/src/THC/THCCachingHostAllocator.cpp line=265 error=4 : unspecified launch failure
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/workspace/ana3/lib/python3.7/site-packages/torch_sparse/spspmm.py", line 26, in spspmm
    index, value = SpSpMM.apply(indexA, valueA, indexB, valueB, m, k, n)
  File "/workspace/ana3/lib/python3.7/site-packages/torch_sparse/spspmm.py", line 33, in forward
    indexC, valueC = mm(indexA, valueA, indexB, valueB, m, k, n)
  File "/workspace/ana3/lib/python3.7/site-packages/torch_sparse/spspmm.py", line 81, in mm
    m, k, n)
RuntimeError: cuda runtime error (4) : unspecified launch failure at /opt/conda/conda-bld/pytorch_1556653114079/work/aten/src/THC/THCCachingHostAllocator.cpp:265

And the operation is fine on cpu.

I'm using Ubuntu 16.04 and cuda 10.1 inside a nvidia-docker container. My Python is 3.7.3, my scipy is 1.13.0 (latest from pip), both my torch_sparse and torch_scatter are 1.3.0 (latest from pip), and my PyTorch is 1.1.0 (installed from conda)

I cannot reproduce this error using a smaller matrix, so I'm afraid I have to provide the original huge matrix. If you want to reproduce this error, please download spspmm.zip and use this snippet:

import pickle
with open('spspmm.zip', 'rb') as f:    # sorry it's not really a zip file
    ei, ew, N = pickle.load(f)
from torch_sparse import spspmm
print(spspmm(ei, ew, ei, ew, N, N, N))    # works fine on cpu
print(spspmm(ei.cuda(), ew.cuda(), ei.cuda(), ew.cuda(), N, N, N))    # produce errors on cuda

I saw in another issue that you suggest calculating spspmm on the CPU, but to me that seems ugly and inefficient. Do you have any idea how to fix this problem on a CUDA device? Thanks!
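A hedged fallback sketch in the meantime: run the problematic product on the CPU and move the result back to the original device afterwards.

import torch
from torch_sparse import spspmm

def spspmm_cpu_fallback(ei, ew, N):
    # Compute A @ A on the CPU, then move index/value back to the input device.
    index, value = spspmm(ei.cpu(), ew.cpu(), ei.cpu(), ew.cpu(), N, N, N)
    return index.to(ei.device), value.to(ew.device)

device = 'cuda' if torch.cuda.is_available() else 'cpu'
ei = torch.tensor([[0, 1], [1, 0]], device=device)
ew = torch.tensor([1.0, 2.0], device=device)
index, value = spspmm_cpu_fallback(ei, ew, N=2)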

reshape

Hi,
This repository is great! I just discovered it and I would like to use it.

Is there an equivalent of torch.Tensor.reshape implemented for SparseTensor?
A function that would do the following operation:

def spreshape(sp, ncol):
    """
    reshape a sparse matrix
    """
    row, col, val = sp.coo()
    index = row * sp.size(1) + col
    return SparseTensor(
        row=index // ncol,
        col=index % ncol,
        value=val
    )
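A small usage sketch of the spreshape helper above (a user-defined function, not part of the library), reshaping a 4 x 4 sparse matrix into a 2 x 8 one:

import torch
from torch_sparse import SparseTensor

sp = SparseTensor.from_dense(torch.arange(16, dtype=torch.float).reshape(4, 4))
print(spreshape(sp, ncol=8).to_dense().shape)   # torch.Size([2, 8])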

PS What does the function sparse_resize do? Obviously not what I want^^

SPMM Example

I was trying to use the SPMM example and I noticed it's using mul_kernel_cuda from PyTorch and scatter_add from pytorch_scatter, and that function is also using PyTorch's official scatter_add_ function. I was wondering if I'm calling it wrong?

Also, I think the instructions in the README for building from source should be:

python setup.py install

Failed to import torch_sparse

Pytorch version : 1.4.0
Pytorch_sparse version: 0.5.1
Cuda version : 10.1
Build torch_sparse: source

When importing torch_sparse, it fails to detect that it was indeed built with CUDA 10.1, like PyTorch.

$ python -c "import torch_sparse"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/localscratch/coulombc.5893633.0/.torch/lib/python3.6/site-packages/torch_sparse/__init__.py", line 36, in <module>
    f'Detected that PyTorch and torch_sparse were compiled with '
RuntimeError: Detected that PyTorch and torch_sparse were compiled with different CUDA versions. PyTorch has CUDA version 10.1 and torch_sparse has CUDA version 0.0. Please reinstall the torch_sparse that matches your PyTorch install.

as

cuda_version = torch.ops.torch_sparse.cuda_version()
somehow returns -1.

The wheel was built on a host that does not have a GPU card, but the import also fails on a host with GPU device.

Cannot import torch.sparse.SparseTensor

While performing:-
import torch
import torch_sparse.SparseTensor as sparse

I get the following error:-
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input> in <module>
      1 import torch
----> 2 import torch_sparse.SparseTensor as sparse

ModuleNotFoundError: No module named 'torch_sparse.SparseTensor'

I am using pytorch==1.3.1, pytorch-sparse==0.4.4
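For reference, a minimal sketch of the import that should work, assuming a torch-sparse version that already ships the SparseTensor class (newer than the 0.4.4 mentioned above):

import torch
from torch_sparse import SparseTensor

A = SparseTensor.from_dense(torch.eye(3))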

Can your work process multi-dimensional (such as 3D) matrix multiplication?

Hi, can your work process multi-dimensional matrix multiplication like torch.matmul? For example,
A is an i*j*k matrix,
B is an l*j matrix,
C = torch.matmul(B, A),
in this case C is an i*l*k tensor. But it seems that your work can't do the same calculation, is that right?

import torch
A = torch.randn(50, 60, 70)
B = torch.randn(40, 60)
C = torch.matmul(B, A)
C.size()
Out[5]: torch.Size([50, 40, 70])

Install: expected str instance, list found

Hi,
I'm trying to install pytorch_sparse from my docker container. I've based my image on a cuda9.0 image.

I think I have all needed dependencies.

When I try to run python setup.py install:

running install
running bdist_egg
running egg_info
writing torch_sparse.egg-info/PKG-INFO
writing dependency_links to torch_sparse.egg-info/dependency_links.txt
writing requirements to torch_sparse.egg-info/requires.txt
writing top-level names to torch_sparse.egg-info/top_level.txt
reading manifest file 'torch_sparse.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'torch_sparse.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
creating build/lib.linux-x86_64-3.6/torch_sparse
copying torch_sparse/eye.py -> build/lib.linux-x86_64-3.6/torch_sparse
copying torch_sparse/coalesce.py -> build/lib.linux-x86_64-3.6/torch_sparse
copying torch_sparse/__init__.py -> build/lib.linux-x86_64-3.6/torch_sparse
copying torch_sparse/spspmm.py -> build/lib.linux-x86_64-3.6/torch_sparse
copying torch_sparse/transpose.py -> build/lib.linux-x86_64-3.6/torch_sparse
copying torch_sparse/spmm.py -> build/lib.linux-x86_64-3.6/torch_sparse
creating build/lib.linux-x86_64-3.6/test
copying test/test_spspmm.py -> build/lib.linux-x86_64-3.6/test
copying test/test_eye.py -> build/lib.linux-x86_64-3.6/test
copying test/test_coalesce.py -> build/lib.linux-x86_64-3.6/test
copying test/__init__.py -> build/lib.linux-x86_64-3.6/test
copying test/utils.py -> build/lib.linux-x86_64-3.6/test
copying test/test_transpose.py -> build/lib.linux-x86_64-3.6/test
copying test/test_spmm.py -> build/lib.linux-x86_64-3.6/test
creating build/lib.linux-x86_64-3.6/torch_sparse/utils
copying torch_sparse/utils/__init__.py -> build/lib.linux-x86_64-3.6/torch_sparse/utils
copying torch_sparse/utils/unique.py -> build/lib.linux-x86_64-3.6/torch_sparse/utils
running build_ext
building 'spspmm_cuda' extension
gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -m64 -fPIC -m64 -fPIC -fPIC -I/opt/conda/lib/python3.6/site-packages/torch/lib/include -I/opt/conda/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/opt/conda/lib/python3.6/site-packages/torch/lib/include/TH -I/opt/conda/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/opt/conda/include/python3.6m -c cuda/spspmm.cpp -o build/temp.linux-x86_64-3.6/cuda/spspmm.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=spspmm_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
In file included from cuda/spspmm.cpp:1:0:
/opt/conda/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include/torch/torch.h:7:2: warning: #warning "Including torch/torch.h for C++ extensions is deprecated. Please include torch/extension.h" [-Wcpp]
#warning
^
/usr/local/cuda/bin/nvcc -I/opt/conda/lib/python3.6/site-packages/torch/lib/include -I/opt/conda/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/opt/conda/lib/python3.6/site-packages/torch/lib/include/TH -I/opt/conda/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/opt/conda/include/python3.6m -c cuda/spspmm_kernel.cu -o build/temp.linux-x86_64-3.6/cuda/spspmm_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=spspmm_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
Traceback (most recent call last):
File "setup.py", line 44, in
packages=find_packages(),
File "/opt/conda/lib/python3.6/site-packages/setuptools/init.py", line 143, in setup
return distutils.core.setup(**attrs)
File "/opt/conda/lib/python3.6/distutils/core.py", line 148, in setup
dist.run_commands()
File "/opt/conda/lib/python3.6/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/opt/conda/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/opt/conda/lib/python3.6/site-packages/setuptools/command/install.py", line 67, in run
self.do_egg_install()
File "/opt/conda/lib/python3.6/site-packages/setuptools/command/install.py", line 109, in do_egg_install
self.run_command('bdist_egg')
File "/opt/conda/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/opt/conda/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/opt/conda/lib/python3.6/site-packages/setuptools/command/bdist_egg.py", line 172, in run
cmd = self.call_command('install_lib', warn_dir=0)
File "/opt/conda/lib/python3.6/site-packages/setuptools/command/bdist_egg.py", line 158, in call_command
self.run_command(cmdname)
File "/opt/conda/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/opt/conda/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/opt/conda/lib/python3.6/site-packages/setuptools/command/install_lib.py", line 11, in run
self.build()
File "/opt/conda/lib/python3.6/distutils/command/install_lib.py", line 107, in build
self.run_command('build_ext')
File "/opt/conda/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/opt/conda/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/opt/conda/lib/python3.6/site-packages/setuptools/command/build_ext.py", line 78, in run
_build_ext.run(self)
File "/opt/conda/lib/python3.6/distutils/command/build_ext.py", line 339, in run
self.build_extensions()
File "/opt/conda/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 343, in build_extensions
build_ext.build_extensions(self)
File "/opt/conda/lib/python3.6/distutils/command/build_ext.py", line 448, in build_extensions
self._build_extensions_serial()
File "/opt/conda/lib/python3.6/distutils/command/build_ext.py", line 473, in _build_extensions_serial
self.build_extension(ext)
File "/opt/conda/lib/python3.6/site-packages/setuptools/command/build_ext.py", line 199, in build_extension
_build_ext.build_extension(self, ext)
File "/opt/conda/lib/python3.6/distutils/command/build_ext.py", line 558, in build_extension
target_lang=language)
File "/opt/conda/lib/python3.6/distutils/ccompiler.py", line 717, in link_shared_object
extra_preargs, extra_postargs, build_temp, target_lang)
File "/opt/conda/lib/python3.6/distutils/unixccompiler.py", line 196, in link
self.spawn(linker + ld_args)
File "/opt/conda/lib/python3.6/distutils/ccompiler.py", line 909, in spawn
spawn(cmd, dry_run=self.dry_run)
File "/opt/conda/lib/python3.6/distutils/spawn.py", line 36, in spawn
_spawn_posix(cmd, search_path, dry_run=dry_run)
File "/opt/conda/lib/python3.6/distutils/spawn.py", line 89, in _spawn_posix
log.info(' '.join(cmd))
TypeError: sequence item 15: expected str instance, list found

Here is my configuration:

$ python -c "import torch; print(torch.__version__); print(torch.cuda.is_available())"
1.0.0
True
$ echo $PATH
/opt/conda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
$ echo $CPATH
/usr/local/cuda/include

Am I missing something?

Thanks

Issue importing torch_sparse

I'm getting an error when importing torch_sparse. I have done a fresh installation of torch, version 1.4.0, in order to get torch_geometric up to date, but I'm running into the following error:

>>> import torch_sparse
Traceback (most recent call last):
  File "/global/homes/d/danieltm/.local/cori/pytorchv1.4.0-gpu/lib/python3.7/site-packages/torch_sparse/__init__.py", line 14, in <module>
    library, [osp.dirname(__file__)]).origin)
  File "/global/homes/d/danieltm/.local/cori/pytorchv1.4.0-gpu/lib/python3.7/site-packages/torch/_ops.py", line 106, in load_library
    ctypes.CDLL(path)
  File "/usr/common/software/pytorch/v1.4.0-gpu/lib/python3.7/ctypes/__init__.py", line 364, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /global/u2/d/danieltm/.local/cori/pytorchv1.4.0-gpu/lib/python3.7/site-packages/torch_sparse/_version.so: undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKSs

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/global/homes/d/danieltm/.local/cori/pytorchv1.4.0-gpu/lib/python3.7/site-packages/torch_sparse/__init__.py", line 22, in <module>
    raise OSError(e)
OSError: /global/u2/d/danieltm/.local/cori/pytorchv1.4.0-gpu/lib/python3.7/site-packages/torch_sparse/_version.so: undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKSs

Do you know what this issue is related to?

ImportError: libcusparse.so.10.0: cannot open shared object file: No such file or directory

Hi, thanks for your torch-geometric code.
I have a problem that is really confusing me.

GPU: TITAN RTX
CUDA: 10.0
PyTorch: 1.1.0

I followed https://rusty1s.github.io/pytorch_geometric/build/html/notes/installation.html to install torch-sparse, but this error was reported when I tried to run my code.

echo $PATH
/ENV/anaconda3/envs/TORCH1.1/bin:/usr/local/cuda/bin:/usr/local/cuda/bin:/ENV/anaconda3/bin:/home/titian/bin:/home/titian/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin

echo $CPATH
/usr/local/cuda/include:/usr/local/cuda/include:

Thanks again!
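
For context, a hedged diagnostic sketch (not part of the original report): it prints the CUDA version this PyTorch build expects and checks whether a cusparse library is visible on the system. The use of ctypes.util.find_library and LD_LIBRARY_PATH assumes a standard Linux setup.

# Hedged diagnostic sketch: report the CUDA version this PyTorch build expects
# and whether a cusparse library is visible to ctypes/ldconfig.
import ctypes.util
import os

import torch

print(torch.__version__)                      # e.g. 1.1.0
print(torch.version.cuda)                     # CUDA version the wheel was built against
print(torch.cuda.is_available())
print(ctypes.util.find_library('cusparse'))   # None if ldconfig does not know it
print(os.environ.get('LD_LIBRARY_PATH', ''))  # extra search paths for the loader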

Error while installing with pytorch 1.1

This is just a tip for anyone having trouble compiling torch_sparse with the current PyTorch version, 1.1.

I ran into errors when I tried to install this package against PyTorch 1.1; the errors disappeared when I uninstalled it and used PyTorch 1.0 instead.

I can post the exact errors for reference.

pip install error

Thanks for sharing the package.
While running the pip installation, I got this error.

/usr/bin/locale:18:23: error: expected declaration before ‘}’ token
     \xeb\xc7\x80    I\xa3\xcer\xba\xe9\xd6\xfe\xff\xffD  f\xf7  \x85\xff\xff\xffH\x89\xfb๋ ‹\x82_  \xc6 H\x83\xc1\x85\xd2\x85b\xff\xff\xffH\x8d\xb5`\xdf\xff\xff\xba\xf0@ H\x89\x8dX\xdf\xff\xff\xe8\xb3\xe7\xff\xffH\x85\xc0H\x8b\x8dX\xdf\xff\xff\x84:\xff\xff\xffH\x89\xdf\xe8k  H\x8d\xb5`\xdf\xff\xff\xba\xf0@ H\x89\xc7\xe87\xea\xff\xffH\x8b\x8dX\xdf\xff\xffH\x89\xcb\xe98\xff\xff\xffH\x85\xc0\x84O\xfd\xff\xffI\x89\xdd\xe9\x84\xfd\xff\xffH\x8b\x85h\xdf\xff\xffH\x8d\xb5`\xdf\xff\xff\xba\xf0@ H\x8b<H\x83\xc7\xe8G\xe7\xff\xffH\x85\xc0\x85\x89\xfc\xff\xff\x83\xbdP\xdf\xff\xff u H\x8b=\xae^  H\x8bG(H;G0\x83\xb9  H\x8dPH\x89W(\xc6
                           ^
    error: command 'gcc' failed with exit status 1
    ----------------------------------------
ERROR: Command errored out with exit status 1:

Could you help me to solve this issue?

Can SpSpMM back propagation be improved?

Not a bug, but I am seeing a decrease in backpropagation performance after changing from a dense representation to sparse matrices using pytorch_sparse. After profiling with torch.utils.bottleneck, I see something like the following on a single machine. I wonder if the SpSpMM backward pass could be improved for performance (a minimal reproduction sketch follows the profiler tables below).

 ncalls  tottime  percall  cumtime  percall filename:lineno(function)
       28   56.723    2.026   56.723    2.026 {method 'run_backward' of 'torch._C._EngineBase' objects}
      576    2.685    0.005    2.685    0.005 {built-in method _unique}
     1408    2.349    0.002    2.349    0.002 {method 'scatter_add_' of 'torch._C._TensorBase' objects}
      832    1.252    0.002    4.078    0.005 /anaconda3/envs/hep/lib/python3.6/site-packages/torch_sparse/spmm.py:4(spmm)
     1280    0.760    0.001    0.760    0.001 {built-in method cat}
      448    0.720    0.002    0.720    0.002 {built-in method tanh}
     1408    0.449    0.000    0.449    0.000 {method 'new_full' of 'torch._C._TensorBase' objects}
      608    0.263    0.000    0.263    0.000 {method 'matmul' of 'torch._C._TensorBase' objects}
      128    0.199    0.002    0.199    0.002 {built-in method tensor}
      576    0.111    0.000    2.900    0.005 /anaconda3/envs/hep/lib/python3.6/site-packages/torch_sparse/coalesce.py:7(coalesce)
     1536    0.069    0.000    0.069    0.000 {built-in method stack}
      608    0.060    0.000    0.330    0.001 /anaconda3/envs/hep/lib/python3.6/site-packages/torch/nn/functional.py:1336(linear)
     1472    0.058    0.000    0.058    0.000 {method 'read' of '_io.BufferedReader' objects}
  1856/32    0.044    0.000    9.348    0.292 /anaconda3/envs/hep/lib/python3.6/site-packages/torch/nn/modules/module.py:483(__call__)
     3458    0.040    0.000    0.040    0.000 {method 'reduce' of 'numpy.ufunc' objects}

You can see that the majority of the backprop time comes from SpSpMM, as shown below.

------------------  ---------------  ---------------  ---------------  ---------------  ---------------
Name                       CPU time        CUDA time            Calls        CPU total       CUDA total
------------------  ---------------  ---------------  ---------------  ---------------  ---------------
SpSpMMBackward         359986.000us          0.000us                1     359986.000us          0.000us
SpSpMMBackward         355614.000us          0.000us                1     355614.000us          0.000us
SpSpMMBackward         346439.000us          0.000us                1     346439.000us          0.000us
SpSpMMBackward         330789.000us          0.000us                1     330789.000us          0.000us
SpSpMMBackward         318372.000us          0.000us                1     318372.000us          0.000us
SpSpMMBackward         314331.000us          0.000us                1     314331.000us          0.000us
SpSpMMBackward         304800.000us          0.000us                1     304800.000us          0.000us
SpSpMMBackward         304651.000us          0.000us                1     304651.000us          0.000us
SpSpMMBackward         302225.000us          0.000us                1     302225.000us          0.000us
SpSpMMBackward         299735.000us          0.000us                1     299735.000us          0.000us
SpSpMMBackward         299507.000us          0.000us                1     299507.000us          0.000us
SpSpMMBackward         298172.000us          0.000us                1     298172.000us          0.000us
SpSpMMBackward         298137.000us          0.000us                1     298137.000us          0.000us
SpSpMMBackward         297997.000us          0.000us                1     297997.000us          0.000us
SpSpMMBackward         297528.000us          0.000us                1     297528.000us          0.000us
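
For reference, here is a hedged, minimal sketch of the kind of call being profiled above. It is not taken from the issue; the sizes, density, and the use of coalesce are assumptions made for illustration.

# Hedged sketch (not from the original report): profile one sparse-sparse
# matrix product and its backward pass, assuming square m = k = n matrices.
import torch
from torch_sparse import coalesce, spspmm

device = 'cuda' if torch.cuda.is_available() else 'cpu'
m = k = n = 1000
nnz = 5000

indexA = torch.randint(0, m, (2, nnz), device=device)
valueA = torch.rand(nnz, device=device, requires_grad=True)
indexA, valueA = coalesce(indexA, valueA, m, k)

indexB = torch.randint(0, k, (2, nnz), device=device)
valueB = torch.rand(nnz, device=device, requires_grad=True)
indexB, valueB = coalesce(indexB, valueB, k, n)

with torch.autograd.profiler.profile(use_cuda=(device == 'cuda')) as prof:
    indexC, valueC = spspmm(indexA, valueA, indexB, valueB, m, k, n)
    valueC.sum().backward()

print(prof.key_averages().table(sort_by='cpu_time_total'))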

segmentation fault when testing

Hi, rusty1s,

I've installed pytorch_scatter in a conda virtual env, and I encountered a segmentation fault when testing.
I've checked the versions: CUDA 10.0, torch 1.0.1.post2, gcc 7.3.

Thank you!

Import error on Windows 10 when using Python 3.8+

When trying to import PyG's GCNConv, I run into the following torch-sparse error:

Traceback (most recent call last):
  File "yard_train.py", line 15, in <module>
    from yard_net import YardNet
  File "C:\Users\<username>\Desktop\yard\yard_net.py", line 6, in <module>
    from torch_geometric.nn import GCNConv
  File "C:\Users\<username>\Miniconda3\envs\<env_name>\lib\site-packages\torch_geometric\__init__.py", line 2, in <module>
    import torch_geometric.nn
  File "C:\Users\<username>\Miniconda3\envs\<env_name>\lib\site-packages\torch_geometric\nn\__init__.py", line 2, in <module>
    from .data_parallel import DataParallel
  File "C:\Users\<username>\Miniconda3\envs\<env_name>\lib\site-packages\torch_geometric\nn\data_parallel.py", line 5, in <module>
    from torch_geometric.data import Batch
  File "C:\Users\<username>\Miniconda3\envs\<env_name>\lib\site-packages\torch_geometric\data\__init__.py", line 1, in <module>
    from .data import Data
  File "C:\Users\<username>\Miniconda3\envs\<env_name>\lib\site-packages\torch_geometric\data\data.py", line 7, in <module>
    from torch_sparse import coalesce
  File "C:\Users\<username>\Miniconda3\envs\<env_name>\lib\site-packages\torch_sparse\__init__.py", line 12, in <module>
    torch.ops.load_library(importlib.machinery.PathFinder().find_spec(
  File "C:\Users\<username>\Miniconda3\envs\<env_name>\lib\site-packages\torch\_ops.py", line 105, in load_library
    ctypes.CDLL(path)
  File "C:\Users\<username>\Miniconda3\envs\<env_name>\lib\ctypes\__init__.py", line 373, in __init__
    self._handle = _dlopen(self._name, mode)
FileNotFoundError: Could not find module 'C:\Users\<username>\Miniconda3\envs\<env_name>\Lib\site-packages\torch_sparse\_convert.pyd' (or one of its dependencies). Try using the full path with constructor syntax.

A straightforward import from the console fails in the same manner:

Python 3.8.2 (default, Apr 14 2020, 19:01:40) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch_sparse
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\<username>\Miniconda3\envs\<env_name>\lib\site-packages\torch_sparse\__init__.py", line 12, in <module>
    torch.ops.load_library(importlib.machinery.PathFinder().find_spec(
  File "C:\Users\<username>\Miniconda3\envs\<env_name>\lib\site-packages\torch\_ops.py", line 105, in load_library
    ctypes.CDLL(path)
  File "C:\Users\<username>\Miniconda3\envs\<env_name>\lib\ctypes\__init__.py", line 373, in __init__
    self._handle = _dlopen(self._name, mode)
FileNotFoundError: Could not find module 'C:\Users\<username>\Miniconda3\envs\<env_name>\Lib\site-packages\torch_sparse\_convert.pyd' (or one of its dependencies). Try using the full path with constructor syntax.

Checking the directory in question reveals that _convert.pyd does exist at the mentioned path, and the only dependency I see in the documentation (torch-scatter) is installed. The installation of both packages was completed without any problems.

I have the following versions installed:

Windows 10
Python 3.8.2
Pytorch 1.5.0
torch-scatter 2.0.4
torch-sparse 0.6.2
CUDA 10.1
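
A hedged workaround sketch, not from the original report: Python 3.8 on Windows no longer uses PATH to resolve DLL dependencies, so directories containing the CUDA DLLs may need to be registered explicitly before importing the compiled extensions. The CUDA install path below is an assumption.

# Hedged sketch (assumptions: Python 3.8+ on Windows, CUDA 10.1 installed at
# the default location). Register the CUDA bin directory for DLL resolution
# before importing the compiled extensions.
import os

os.add_dll_directory(r'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin')

import torch          # noqa: E402
import torch_sparse   # noqa: E402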

Illegal CUDA Memory Access

import torch
from torch_sparse import spspmm

index1 = torch.LongTensor(2, 100).random_(100).cuda()
value1 = torch.Tensor(100).random_(100).cuda()
spspmm(index1, value1, index1, value1, 100, 100, 100)

This gives me the following error:

 spspmm(index1,value1,index1,value1,100,100,100)
Traceback (most recent call last):

 File "<ipython-input-18-ce50b836d1c1>", line 1, in <module>

    spspmm(index1,value1,index1,value1,100,100,100)

 File "C:\Users\Michael\Downloads\WinPython-64bit-3.5.2.2Qt5\python-3.5.2.amd64\lib\site-packages\torch_sparse\spspmm.py", line 26, in spspmm

    index, value = SpSpMM.apply(indexA, valueA, indexB, valueB, m, k, n)

  File "C:\Users\Michael\Downloads\WinPython-64bit-3.5.2.2Qt5\python-3.5.2.amd64\lib\site-packages\torch_sparse\spspmm.py", line 33, in forward
    indexC, valueC = mm(indexA, valueA, indexB, valueB, m, k, n)

  File "C:\Users\Michael\Downloads\WinPython-64bit-3.5.2.2Qt5\python-3.5.2.amd64\lib\site-packages\torch_sparse\spspmm.py", line 81, in mm
    m, k, n)
RuntimeError: CUDA error: an illegal memory access was encountered (copy_device_to_device at C:/w/1/s/windows/pytorch/aten/src/ATen/native/cuda/Copy.cu:166)
(no backtrace available) 

Any idea what could be causing it? I'm running PyTorch 1.2.0 on Windows 10 with a GTX 1080 and CUDA 9.0.
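
For context, and not as an answer taken from the issue itself: spspmm generally expects coalesced inputs, and randomly generated indices typically contain duplicate (row, col) pairs, so a hedged variant of the snippet above that coalesces first would look like this (sizes unchanged).

# Hedged sketch (not from the original report): coalesce the operand before
# the sparse-sparse product, since random_(100) usually yields duplicates.
import torch
from torch_sparse import coalesce, spspmm

index1 = torch.LongTensor(2, 100).random_(100).cuda()
value1 = torch.Tensor(100).random_(100).cuda()
index1, value1 = coalesce(index1, value1, 100, 100)
indexC, valueC = spspmm(index1, value1, index1, value1, 100, 100, 100)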

Compilation error: expected primary-expression before token

I'm packaging pytorch_sparse for NixOS.

Bug description

Version: revision 2eff407 (current master HEAD atm)
When building, I get many errors such as this one:

/build/pytorch_sparse-2eff407/csrc/cpu/spspmm_cpu.cpp:90:62: error: expected primary-expression before ‘>’ token
   90 |         valC_data = optional_valueC.value().data_ptr<scalar_t>();
      |                                                              ^

Investigations

It could be an issue in my build setup, but I don't have much knowledge about this kind of gcc issue.

Seems related to this:

More

For the record:

torch-sparse-cuda = super.buildPythonPackage rec {
        pname = "torch_sparse";
        version = "0.6.0+cuda";

        doCheck = false;

        src = pkgs.fetchgit {
          url = "https://github.com/rusty1s/pytorch_sparse";
          rev = "2eff407270c271424ecaf6b8fbcaa7dc1564ffac";
          sha256 = "1cpjwkxlah0xrv9gv4m3bmm837zv3x5lxqdp4b90nb0am4zf8dsh";
        };

        nativeBuildInputs = with pkgs; [
          which
          gcc
        ];

        buildInputs = with super; [
          pytestrunner
        ];

        propagatedBuildInputs = with super; [
          pytorch
        ];
      };

Unable to install torch-sparse (Windows 10, CUDA 10.1)

Windows 10
CUDA 10.1
PyTorch 1.4.0

I'm having issues getting torch-sparse installed. Any ideas?

Issue 1:

pip install torch-scatter==latest+cu101 torch-sparse==latest+cu101 -f https://pytorch-geometric.com/whl/torch-1.4.0.html

Returns
Looking in links: https://pytorch-geometric.com/whl/torch-1.4.0.html
ERROR: Could not find a version that satisfies the requirement torch-scatter==latest+cu101 (from versions: latest+cpu, 0.3.0, 1.0.2, 1.0.3, 1.0.4, 1.1.0, 1.1.1, 1.1.2, 1.2.0, 1.3.0, 1.3.1, 1.3.2, 1.4.0, 2.0.2, 2.0.3, 2.0.3+cpu, 2.0.4, 2.0.4+cpu)
ERROR: No matching distribution found for torch-scatter==latest+cu101

Issue 2: When attempting to install from source, pip install torch-sparse

Returns

Collecting torch-sparse
  Using cached torch_sparse-0.6.0.tar.gz (29 kB)
Requirement already satisfied: scipy in c:\users\caleb\anaconda3\envs\graphstar\lib\site-packages (from torch-sparse) (1.4.1)
Requirement already satisfied: numpy>=1.13.3 in c:\users\caleb\anaconda3\envs\graphstar\lib\site-packages (from scipy->torch-sparse) (1.18.1)
Building wheels for collected packages: torch-sparse
...
Traceback (most recent call last):
    File "<string>", line 1, in <module>
    File "C:\Users\caleb\AppData\Local\Temp\pip-install-yonrwolo\torch-sparse\setup.py", line 81, in <module>
      setup(
    File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\site-packages\setuptools\__init__.py", line 144, in setup
      return distutils.core.setup(**attrs)
    File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\distutils\core.py", line 148, in setup
      dist.run_commands()
    File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\distutils\dist.py", line 966, in run_commands
      self.run_command(cmd)
    File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\distutils\dist.py", line 985, in run_command
      cmd_obj.run()
    File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\site-packages\wheel\bdist_wheel.py", line 259, in run
      self.run_command('install')
    File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\distutils\cmd.py", line 313, in run_command
      self.distribution.run_command(command)
    File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\distutils\dist.py", line 985, in run_command
      cmd_obj.run()
    File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\site-packages\setuptools\command\install.py", line 61, in run
      return orig.install.run(self)
    File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\distutils\command\install.py", line 557, in run
      self.run_command(cmd_name)
    File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\distutils\cmd.py", line 313, in run_command
      self.distribution.run_command(command)
    File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\distutils\dist.py", line 985, in run_command
      cmd_obj.run()
    File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\site-packages\setuptools\command\install_egg_info.py", line 34, in run
      self.run_command('egg_info')
    File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\distutils\cmd.py", line 313, in run_command
      self.distribution.run_command(command)
    File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\distutils\dist.py", line 985, in run_command
      cmd_obj.run()
    File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\site-packages\setuptools\command\egg_info.py", line 297, in run
      self.find_sources()
    File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\site-packages\setuptools\command\egg_info.py", line 304, in find_sources
      mm.run()
    File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\site-packages\setuptools\command\egg_info.py", line 535, in run
      self.add_defaults()
    File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\site-packages\setuptools\command\egg_info.py", line 579, in add_defaults
      self.read_manifest()
    File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\site-packages\setuptools\command\sdist.py", line 220, in read_manifest
      self.filelist.append(line)
    File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\site-packages\setuptools\command\egg_info.py", line 477, in append
      path = convert_path(item)
    File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\distutils\util.py", line 122, in convert_path
      raise ValueError("path '%s' cannot be absolute" % pathname)
  ValueError: path '/Users/rusty1s/github/pytorch_sparse/csrc/convert.cpp' cannot be absolute
  ----------------------------------------
  ERROR: Failed building wheel for torch-sparse
  Running setup.py clean for torch-sparse
Failed to build torch-sparse
Installing collected packages: torch-sparse
...
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "C:\Users\caleb\AppData\Local\Temp\pip-install-yonrwolo\torch-sparse\setup.py", line 81, in <module>
        setup(
      File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\site-packages\setuptools\__init__.py", line 144, in setup
        return distutils.core.setup(**attrs)
      File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\distutils\core.py", line 148, in setup
        dist.run_commands()
      File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\distutils\dist.py", line 966, in run_commands
        self.run_command(cmd)
      File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\distutils\dist.py", line 985, in run_command
        cmd_obj.run()
      File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\site-packages\setuptools\command\install.py", line 61, in run
        return orig.install.run(self)
      File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\distutils\command\install.py", line 557, in run
        self.run_command(cmd_name)
      File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\distutils\cmd.py", line 313, in run_command
        self.distribution.run_command(command)
      File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\distutils\dist.py", line 985, in run_command
        cmd_obj.run()
      File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\site-packages\setuptools\command\install_egg_info.py", line 34, in run
        self.run_command('egg_info')
      File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\distutils\cmd.py", line 313, in run_command
        self.distribution.run_command(command)
      File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\distutils\dist.py", line 985, in run_command
        cmd_obj.run()
      File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\site-packages\setuptools\command\egg_info.py", line 297, in run
        self.find_sources()
      File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\site-packages\setuptools\command\egg_info.py", line 304, in find_sources
        mm.run()
      File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\site-packages\setuptools\command\egg_info.py", line 535, in run
        self.add_defaults()
      File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\site-packages\setuptools\command\egg_info.py", line 579, in add_defaults
        self.read_manifest()
      File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\site-packages\setuptools\command\sdist.py", line 220, in read_manifest
        self.filelist.append(line)
      File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\site-packages\setuptools\command\egg_info.py", line 477, in append
        path = convert_path(item)
      File "C:\Users\caleb\Anaconda3\envs\GraphStar\lib\distutils\util.py", line 122, in convert_path
        raise ValueError("path '%s' cannot be absolute" % pathname)
    ValueError: path '/Users/rusty1s/github/pytorch_sparse/csrc/convert.cpp' cannot be absolute
    ----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\Users\caleb\Anaconda3\envs\GraphStar\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\caleb\\AppData\\Local\\Temp\\pip-install-yonrwolo\\torch-sparse\\setup.py'"'"'; __file__='"'"'C:\\Users\\caleb\\AppData\\Local\\Temp\\pip-install-yonrwolo\\torch-sparse\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\caleb\AppData\Local\Temp\pip-record-s01gqisg\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Users\caleb\Anaconda3\envs\GraphStar\Include\torch-sparse' Check the logs for full command output.
