quiver-team / torch-quiver

PyTorch Library for Low-Latency, High-Throughput Graph Learning on GPUs.

Home Page: https://torch-quiver.readthedocs.io/en/latest/

License: Apache License 2.0

Languages: Python 71.36%, C++ 13.40%, Cuda 11.05%, Makefile 2.29%, CMake 0.95%, Shell 0.81%, Dockerfile 0.13%
Topics: pytorch, geometric-deep-learning, graph-learning, gpu-acceleration, graph-neural-networks, distributed-computing

torch-quiver's Introduction


Quiver is a distributed graph learning library for PyTorch Geometric (PyG). The goal of Quiver is to make distributed graph learning easy to use and high-performance.



Release 0.2.0 is out!

In the latest release torch-quiver==0.2.0, we have added support for efficient GNN serving and faster feature collection.

High-throughput & Low-latency GNN Serving

Quiver now supports efficient GNN serving. The serving API is simple and easy to use. For example, the following code snippet shows how to use Quiver to serve a GNN model:

from torch_geometric.datasets import Reddit
from torch.multiprocessing import Queue
from quiver import RequestBatcher, HybridSampler, InferenceServer


# Define dataset and sampler
dataset = Reddit(...)

# Instantiate the request batching component
request_batcher = RequestBatcher(stream_input_queue, ...)
# batched_request_queue_list = [cpu_batched_request_queue_list, gpu_batched_request_queue_list]
batched_queue_list = request_batcher.batched_request_queue_list() 

# Instantiate the sampler component
hybrid_sampler = HybridSampler(dataset, batched_queue_list, ...)
# sampled_request_queue_list = [cpu_sampled_request_queue_list, gpu_sampled_request_queue_list]
sampled_queue_list = hybrid_sampler.sampled_request_queue_list()
hybrid_sampler.start()

# Instantiate the inference server component
server = InferenceServer(model_path, dataset, sampled_queue_list, ...)
# result_queue_list = [Queue, ..., Queue]
result_queue_list = server.result_queue_list() 

server.start()

A full example that uses Quiver to serve a GNN model with the Reddit dataset on a single machine can be found here.

Test Serving

$ cd examples/serving/reddit
$ python prepare_data.py
$ python reddit_serving.py

Key Idea

Quiver's key idea is to exploit workload metrics for predicting the irregular computation of GNN requests, and to govern the use of GPUs for graph sampling and feature aggregation: (1) for graph sampling, Quiver calculates the probabilistic sampled graph size, a metric that predicts the degree of parallelism in graph sampling. Quiver uses this metric to assign sampling tasks to GPUs only when the performance gains surpass CPU-based sampling; and (2) for feature aggregation, Quiver relies on the feature access probability to decide which features to partition and replicate across a distributed GPU NUMA topology. Quiver achieves up to 35$\times$ lower latency with an 8$\times$ higher throughput compared to state-of-the-art GNN approaches (DGL and PyG).
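
As a rough illustration of the first metric (a back-of-the-envelope sketch of the idea only, not Quiver's internal code; expected_sampled_size, choose_sampling_device and the 100,000-node threshold are hypothetical names and values):

def expected_sampled_size(batch_size, fanouts, avg_degree):
    # Predict how many nodes a mini-batch touches during neighbour sampling:
    # each hop expands a frontier node by at most `fanout` neighbours.
    frontier, total = batch_size, batch_size
    for fanout in fanouts:
        frontier *= min(fanout, avg_degree)
        total += frontier
    return total

def choose_sampling_device(batch_size, fanouts, avg_degree, gpu_threshold=100_000):
    # Assign the sampling task to the GPU only when the predicted degree of
    # parallelism is large enough for GPU gains to outweigh launch/transfer overheads.
    if expected_sampled_size(batch_size, fanouts, avg_degree) >= gpu_threshold:
        return "GPU"
    return "CPU"

print(choose_sampling_device(1024, [25, 10], avg_degree=50))  # prints "GPU" for a Reddit-like batch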

Below is a figure from a benchmark that evaluates the serving performance of Quiver, PyG (2.0.3) and DGL (1.0.2) on a 2-GPU server running GraphSage on the Reddit dataset.

Throughput vs. Latency of GNN request serving


Why Quiver?


The primary motivation for this project is to make it easy to take a PyG program and scale it across many GPUs and CPUs. A typical scenario is: users develop graph learning programs with PyG's easy-to-use APIs and rely on Quiver to run these PyG programs at large scale. To make such scaling effective, Quiver has several novel features:

  • High performance: Quiver enables GPUs to be effectively used in accelerating performance-critical graph learning tasks: graph sampling, feature collection and data-parallel training. Quiver thus often significantly outperforms PyG and DGL even with a single GPU (see benchmark results below), especially when processing large-scale datasets and models.

  • High scalability: Quiver can achieve (super-)linear scalability in distributed graph learning. This is enabled by Quiver's novel adaptive data/feature/processor management techniques and effective usage of fast networking technologies (e.g., NVLink and RDMA).

  • Easy to use: To use Quiver, developers only need to add a few lines of code to existing PyG programs. Quiver is thus easy for PyG users to adopt and to deploy in production clusters.

Faster Feature Aggregation

Feature aggregation is one of the performance bottlenecks of GNN systems. Quiver enables faster feature aggregation with the following techniques:

  • Quiver uses the feature access probability metric to place popular features strategically on GPUs. A primary objective of feature placement is to let GPUs exploit low-latency connectivity to their peer GPUs, such as NVLink and InfiniBand, so that features can be accessed with low latency during aggregation.

  • Quiver uses GPU kernels that can leverage efficient one-sided reads to access remote features over NVLink/InfiniBand.

More details of our feature aggregation techniques can be found in our repo quiver-feature.
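
For concreteness, the sketch below shows how these placement decisions surface in the quiver.Feature API; the constructor arguments mirror the examples used elsewhere on this page, while the dataset path and the "4G" cache budget are placeholders rather than recommendations:

import torch
import quiver
from torch_geometric.datasets import Reddit

data = Reddit("/data/Reddit")[0]              # placeholder dataset path
csr_topo = quiver.CSRTopo(data.edge_index)    # topology used to estimate feature access probability

quiver_feature = quiver.Feature(
    rank=0,
    device_list=[0, 1],                       # GPUs that hold feature caches
    device_cache_size="4G",                   # per-GPU budget for hot (frequently accessed) features
    cache_policy="device_replicate",          # or "p2p_clique_replicate" across NVLink cliques
    csr_topo=csr_topo,
)
quiver_feature.from_cpu_tensor(data.x)        # remaining cold features stay in host memory

batch_feature = quiver_feature[torch.arange(1024)]  # served from local cache, peer GPUs, or CPU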

Below is a chart from a benchmark that evaluates the performance of Quiver, PyG (2.0.1) and DGL (0.7.0) on a 4-GPU server running the Open Graph Benchmark.

[Figure: end-to-end benchmark results]

We will add multi-node results soon.

For system design details, see Quiver's design overview (Chinese version: 设计简介).

Install


Install Dependencies

To install Quiver:

  1. Install PyTorch
  2. Install PyG

Pip Install

$ pip install torch-quiver

We have tested Quiver with the following setup:

  • OS: Ubuntu 18.04, Ubuntu 20.04
  • CUDA: 10.2, 11.1
  • GPU: P100, V100, Titan X, A6000

Install From Source

$ git clone https://github.com/quiver-team/torch-quiver.git && cd torch-quiver
$ QUIVER_ENABLE_CUDA=1 python setup.py install

Test Install

You can download Quiver's examples to test installation:

$ git clone git@github.com:quiver-team/torch-quiver.git && cd torch-quiver
$ python3 examples/pyg/reddit_quiver.py

A successful run should contain the following line:

Epoch xx, Loss: xx.yy, Approx. Train: xx.yy

Use Quiver with Docker

Docker is the simplest way to use Quiver. Check the guide for details.

Quick Start

To use Quiver, you need to replace PyG's graph sampler and feature collector with quiver.Sampler and quiver.Feature. The replacement usually requires only a few changes in existing PyG programs.

Use Quiver in Single-GPU PyG Scripts

Only three steps are required to enable Quiver in a single-GPU PyG script:

import quiver

...

## Step 1: Replace PyG graph sampler
# train_loader = NeighborSampler(data.edge_index, ...) # Comment out PyG sampler
train_loader = torch.utils.data.DataLoader(train_idx) # Quiver: PyTorch Dataloader
quiver_sampler = quiver.pyg.GraphSageSampler(quiver.CSRTopo(data.edge_index), sizes=[25, 10]) # Quiver: Graph sampler

...

## Step 2: Replace PyG feature collectors
# feature = data.x.to(device) # Comment out PyG feature collector
quiver_feature = quiver.Feature(rank=0, device_list=[0]).from_cpu_tensor(data.x) # Quiver: Feature collector

...
  
## Step 3: Train PyG models with Quiver
# for batch_size, n_id, adjs in train_loader: # Comment out PyG training loop
for seeds in train_loader: # Use PyTorch training loop in Quiver
  n_id, batch_size, adjs = quiver_sampler.sample(seeds)  # Use Quiver graph sampler
  batch_feature = quiver_feature[n_id]  # Use Quiver feature collector
  ...
...
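
For reference, the sketch below fills in the elided parts of the training step; the two-layer SAGE model, optimizer, loss and labels are ordinary PyG-style placeholders (not part of Quiver's API), while quiver_sampler and quiver_feature are the components created in Steps 1 and 2:

import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv

class SAGE(torch.nn.Module):
    # Placeholder two-layer GraphSAGE in the NeighborSampler style of PyG's Reddit example.
    def __init__(self, in_channels, hidden_channels, out_channels):
        super().__init__()
        self.convs = torch.nn.ModuleList([
            SAGEConv(in_channels, hidden_channels),
            SAGEConv(hidden_channels, out_channels),
        ])

    def forward(self, x, adjs):
        for i, (edge_index, _, size) in enumerate(adjs):
            x_target = x[:size[1]]                    # target nodes are listed first
            x = self.convs[i]((x, x_target), edge_index)
            if i != len(self.convs) - 1:
                x = F.relu(x)
        return F.log_softmax(x, dim=-1)

device = torch.device("cuda:0")
model = SAGE(data.num_features, 256, dataset.num_classes).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
y = data.y.squeeze().to(device)

model.train()
for seeds in train_loader:
    n_id, batch_size, adjs = quiver_sampler.sample(seeds)   # Quiver graph sampler
    adjs = [adj.to(device) for adj in adjs]
    optimizer.zero_grad()
    out = model(quiver_feature[n_id].to(device), adjs)      # Quiver feature collector
    loss = F.nll_loss(out, y[n_id[:batch_size]])
    loss.backward()
    optimizer.step()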

Use Quiver in Multi-GPU PyG Scripts

To use Quiver in multi-GPU PyG scripts, we can simply pass quiver.Feature and quiver.Sampler as arguments to the child processes launched in PyTorch's DDP training, as shown below:

import quiver

# PyG DDP function that trains GNN models
def ddp_train(rank, feature, sampler):
  ...

# Replace PyG graph sampler and feature collector with Quiver's alternatives
quiver_sampler = quiver.pyg.GraphSageSampler(...)
quiver_feature = quiver.Feature(...)

mp.spawn(
      ddp_train, 
      args=(quiver_feature, quiver_sampler), # Pass Quiver components as arguments
      nprocs=world_size,
      join=True
  )

A full multi-GPU Quiver example is here.
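
For orientation, a rough sketch of what the body of ddp_train might contain is shown below; the rendezvous settings, model, and per-rank seed split are placeholders in the spirit of the snippet above, and only the feature and sampler arguments are Quiver objects:

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def ddp_train(rank, feature, sampler):
    os.environ.setdefault("MASTER_ADDR", "localhost")   # placeholder rendezvous settings
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = DDP(SAGE(...).to(rank), device_ids=[rank])  # placeholder GNN model
    train_loader = torch.utils.data.DataLoader(
        train_idx.chunk(world_size)[rank],              # each rank trains on its own shard of seed nodes
        batch_size=1024, shuffle=True)

    for seeds in train_loader:
        n_id, batch_size, adjs = sampler.sample(seeds)  # Quiver sampler received from the parent process
        x = feature[n_id]                               # Quiver feature collector received from the parent process
        ...                                             # forward/backward as in the single-GPU loop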

Run Quiver

Below is an example command that runs the Quiver script examples/pyg/reddit_quiver.py:

$ python3 examples/pyg/reddit_quiver.py

Quiver has the same launch command on both single-GPU servers and multi-GPU servers. We will provide multi-node examples soon.

Examples

We provide a rich set of examples that show how to enable Quiver in real-world PyG scripts.

Documentation

Quiver provides many parameters to optimise the performance of its graph samplers (e.g., GPU-local or CPU-GPU hybrid) and feature collectors (e.g., feature replication/sharding strategies). Check Documentation for details.
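
For example, the sampler's mode and the feature collector's cache_policy are the two knobs that appear most often in the snippets and issues on this page (the values below are illustrative only; see the documentation for the full list of options):

# GPU-resident graph sampling with a 3-hop fan-out, as in the papers100M benchmark below
quiver_sampler = quiver.pyg.GraphSageSampler(csr_topo, [15, 10, 5], 0, mode="GPU")

# Cache hot features per device, or replicate them per NVLink clique (requires quiver.init_p2p())
quiver_feature = quiver.Feature(rank=0, device_list=[0, 1, 2, 3], device_cache_size="4G",
                                cache_policy="device_replicate",  # or "p2p_clique_replicate"
                                csr_topo=csr_topo)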

Community

We welcome contributors to join the development of Quiver. Quiver is currently maintained by researchers from the University of Edinburgh, Imperial College London, Tsinghua University and the University of Waterloo. The development of Quiver has received support from Alibaba and Lambda Labs.

Citation

If you find the design of Quiver useful or use Quiver in your work, please cite Quiver with the bibtex below:

@misc{quiver2023,
    author = {Zeyuan Tan and Xiulong Yuan and Congjie He and Man-Kit Sit and Guo Li and Xiaoze Liu and Baole Ai and Kai Zeng and Peter Pietzuch and Luo Mai},
    title = {Quiver: Supporting GPUs for Low-Latency, High-Throughput GNN Serving with Workload Awareness},
    eprint = {2305.10863},
    year = {2023}
}

torch-quiver's People

Contributors

austincheang, congjiehe, eedalong, huwan, l1nkr, lausannel, lgarithm, luomai, ningsir, yaox12, yl16417, zenotan


torch-quiver's Issues

benchmark ogbn-papers100M, CUDA free failed

When running benchmarks/ogbn-papers100M/dist_sampling_ogb_paper100M_quiver.py, the process was terminated by the thrown error what(): CUDA free failed: cudaErrorIllegalAddress: an illegal memory access was encountered.

Here, I set quiver_feature to device_replicate, quiver_sampler to GPU mode, and Paper100MDataset to gpu_portion=0:

dataset = Paper100MDataset(root, 0)

quiver_sampler = quiver.pyg.GraphSageSampler(csr_topo, [15, 10, 5], 0, mode="GPU")

quiver_feature = quiver.Feature(
    rank=0,
    device_list=list(range(world_size)),
    device_cache_size="4G",
    cache_policy="device_replicate",
    csr_topo=csr_topo,
)

Here is the error:

[Screenshot of the error output]

It is known that the core dump was caused by creating a sampler when lazily calling qv.device_quiver_from_csr_array() in a DDP subprocess. I do not understand how this could happen, given that the graph CSR is 12.89 GB while the NVIDIA V100 has 32 GB of memory.

How to run quiver on server with complex GPU topology?

Hi, I want to run Quiver's p2p_clique_replicate cache policy on a single server with 4 A100 GPUs. The GPU topology is as follows:
        GPU0   GPU1   GPU2   GPU3   CPU Affinity   NUMA Affinity
GPU0     X     NV12   PXB    PXB    0-25,52-77     0
GPU1    NV12    X     PXB    PXB    0-25,52-77     0
GPU2    PXB    PXB     X     NV12   0-25,52-77     0
GPU3    PXB    PXB    NV12    X     0-25,52-77     0
There are NVLinks between GPU 0,1 and GPU 2,3.

According to the documentation, there should be two cliques (GPUs 0,1 and GPUs 2,3), and the cache should be replicated across the two cliques. However, I found that the cache seems to be distributed over all 4 GPUs.
Here is my code (dist_sampling_ogb_reddit_quiver.py, Reddit dataset, feature 500 MB):
quiver.init_p2p(device_list=list(range(world_size)))
quiver_feature = quiver.Feature(rank=0, device_list=list(range(world_size)), device_cache_size="0.1G", cache_policy="p2p_clique_replicate", csr_topo=csr_topo)
This is what I got:
[0, 1, 2, 3]
LOG>>> P2P Access Initilization
Enable P2P Access Between 0 <---> 1
Enable P2P Access Between 0 <---> 2
Enable P2P Access Between 0 <---> 3
Enable P2P Access Between 1 <---> 2
Enable P2P Access Between 1 <---> 3
Enable P2P Access Between 2 <---> 3
WARNING: You are using p2p_clique_replicate mode, MAKE SURE you have called quiver.init_p2p() to enable p2p access
LOG>>> 76% data cached
LOG>>> GPU [0, 1, 2, 3] belong to the same NUMA Domain
LOG >>> Memory Budge On 0 is 102 MB
LOG >>> Memory Budge On 1 is 102 MB
LOG >>> Memory Budge On 2 is 102 MB
LOG >>> Memory Budge On 3 is 102 MB
Let's use 4 GPUs!
WARNING: You are using p2p_clique_replicate mode, MAKE SURE you have called quiver.init_p2p() to enable p2p access
WARNING: You are using p2p_clique_replicate mode, MAKE SURE you have called quiver.init_p2p() to enable p2p access
WARNING: You are using p2p_clique_replicate mode, MAKE SURE you have called quiver.init_p2p() to enable p2p access
WARNING: You are using p2p_clique_replicate mode, MAKE SURE you have called quiver.init_p2p() to enable p2p access
Epoch: 019, Epoch Time: 0.5197241902351379

So I wonder if there is a solution to enable p2p_clique_replicate on my 4 GPU server.
Thanks~

poor scalability when using multiple gpus

When we use multiple GPUs to do sampling with Quiver in GPU sampling mode (graph stored in GPU memory), we find that the scalability is poor.

To be specific, we ran the example code on Reddit and the sampling cost is about 1.11 s when using 1 GPU. We expected the sampling time with 8 GPUs to be about 8x lower, since all GPUs sample independently. However, with 8 GPUs the sampling cost is 0.79 s, which is much higher than we expected. In addition, with 4 GPUs the sampling cost is 0.66 s, which is lower than with 8 GPUs.

Could you please give us some insight or explanation about this phenomenon? Thank you so much!

something about loading `sort_feature.pt` in Paper100MDataset

It seems strange to load the feature from sort_feature.pt in the Paper100MDataset class in benchmarks/ogbn-papers100M/dist_sampling_ogb_paper100M_quiver.py.

[Screenshot of the relevant code]

I found that the feature in sort_feature.pt has already been sorted in in-degree order via the statement feature = feature[prev_order] in preprocess.py. In the papers100M benchmark, the sorted feature is then sorted again. In my opinion, Paper100MDataset should load the feature from feature.pt rather than sort_feature.pt.

Is that correct?

Error when cache size is larger than the size of feature

When I try to put all features in GPU memory, an error occurs in the __getitem__ function in feature.py.

I guess the root cause is in the append function in quiver_feature.cu at line 193, which is quiverRegister(tensor.data_ptr(), data_size, cudaHostRegisterMapped);

The error occurs when trying to register zero-copy memory of 0 bytes. The problem can be fixed by adding an if statement that skips memory registration when the feature tensor is empty.

Typo in Introduction_cn.md

The phrase 同时由于这部分数据的访问贷款更高 ("and since the access 贷款 (loan) to this data is higher") appears in the following sentence:

这样的策略下我们不仅在更多卡加入时能提供更大的缓存,同时由于这部分数据的访问贷款更高,我们便可以实现**特征提取的多卡超线性加速**,即当有两个GPU加入时,两个GPU的特征提取吞吐总速度大于一个只有一个GPU时的特征提取吞吐速度。 (Translation: "Under this strategy, not only can we provide a larger cache as more GPUs are added, but also, because the access bandwidth to this data is higher, we can achieve super-linear multi-GPU speedup of feature collection, i.e. with two GPUs the total feature-collection throughput is greater than with a single GPU.")

贷款 ("loan") should be 带宽 ("bandwidth").

Performance evaluation about dgl, pyg and torch-quiver

Experiment settings

The experiment settings are as follows:

  • Ubuntu 18.04, P100 GPU, 16 GB GPU memory;
  • dataset: ogbn;
  • dgl: 0.7.1
  • pyg: 2.0.2
  • torch-quiver: 0.1.0
  • hiddens = 256, neighbors = [15, 10, 5], epochs = 10, batch_size = 1024.

code: pyg, quiver, dgl

Result

Metric: average epoch time.

  Setting            DGL       PyG       Quiver
  num_workers = 4    13.60 s   30.07 s   –
  num_workers = 8    9.78 s    17.61 s   –
  num_workers = 12   8.92 s    13.42 s   –
  sample on GPU      3.64 s    –         11.04 s

Question

  1. PyG's data loading is much slower than DGL's. Can you improve the data loading speed?
  2. Why is the sampling speed of PyG so much slower than that of DGL?

RPC and all-reduce cannot work together

Run kungfu-run -np 4 python3 benchmarks/ogbn_products_sage/dist_sampling.py --runs 1 --epochs 1

We will get an RPC timeout error like:

Traceback (most recent call last):
[127.0.0.1.10000::stderr]   File "benchmarks/ogbn_products_sage/dist_sampling.py", line 249, in <module>
[127.0.0.1.10000::stderr]     loss, acc = train(epoch)
[127.0.0.1.10000::stderr]   File "benchmarks/ogbn_products_sage/dist_sampling.py", line 168, in train
[127.0.0.1.10000::stderr]     for feature, label, adjs in train_loader:
[127.0.0.1.10000::stderr]   File "/home/guest/python3.7/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
[127.0.0.1.10000::stderr]     data = self._next_data()
[127.0.0.1.10000::stderr]   File "/home/guest/python3.7/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
[127.0.0.1.10000::stderr]     data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
[127.0.0.1.10000::stderr]   File "/home/guest/python3.7/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
[127.0.0.1.10000::stderr]     return self.collate_fn(data)
[127.0.0.1.10000::stderr]   File "/home/guest/python3.7/lib/python3.7/site-packages/quiver/dist_cuda_sampler.py", line 186, in sample
[127.0.0.1.10000::stderr]     y = self.get_data(n_id[:batch_size], False)
[127.0.0.1.10000::stderr]   File "/home/guest/python3.7/lib/python3.7/site-packages/quiver/dist_cuda_sampler.py", line 104, in get_data
[127.0.0.1.10000::stderr]     res[i] = res[i].wait()
[127.0.0.1.10000::stderr] RuntimeError: RPCErr:1:RPC ran for more than set timeout (20000 ms) and will now be marked with an error

OS Unmap stuck

[Screenshot of the profiling result]
When we unregister CPU memory from CUDA (e.g. a 10 GB feature), it takes too long in kernel mode on komodo. However, platypus2 took about a second to finish.
Komodo Linux version: Linux komodo02 5.4.0-88-generic #99-Ubuntu SMP Thu Sep 23 17:29:00 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
Platypus2 Linux version: Linux platypus2 4.15.0-151-generic #157-Ubuntu SMP Fri Jul 9 23:07:57 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

A strange phenomenon about quiver sampler

Hi, thank you for your wonderful open-source work on Quiver. Recently I was trying to look deeper into the sampler module, but encountered a rather strange phenomenon, shown in the picture below. I was testing the Quiver sampler speed and the model training speed.

Machine 1: Tesla V100-SXM2-16GB, Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz, 20 physical CPU cores.
Model: GraphSage, batch_size=128, samples=[25,10], Reddit.

[Screenshot: timing results, 2022-02-28]
It seems very strange, because the time taken by sampling plus model training is less than the time taken by sampling alone.

I tried several approaches to figure out the reason (DataLoader settings, CUDA streams, etc.), but I am still confused. Maybe you can help me?

CUDA

Excuse me, I am confused by your CSRSample kernel. The block size is 128, but the tile size is 64, so half of the threads in a block are not used. Can you explain this to me?

Provide a newer dockerfile

FROM pytorch/pytorch:1.10.0-cuda11.3-cudnn8-devel

# Install PyG.
RUN CPATH=/usr/local/cuda/include:$CPATH && \
    LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH && \
    DYLD_LIBRARY_PATH=/usr/local/cuda/lib:$DYLD_LIBRARY_PATH

RUN pip install scipy==1.5.0

RUN pip install --no-index torch-scatter -f https://data.pyg.org/whl/torch-1.10.0+cu113.html && \
    pip install --no-index torch-sparse -f https://data.pyg.org/whl/torch-1.10.0+cu113.html && \
    pip install torch-geometric

WORKDIR /quiver
ADD . .
RUN pip install -v .

# Set the default command to python3.
CMD ["python3"]

It works on my host machine: Ubuntu 18.04, NVIDIA-SMI 515.65.01, Driver Version 515.65.01, CUDA Version 11.7, Tesla V100-SXM2.

use customised stream in thrust APIs

@xiaoming-qxm @baoleai we can use thrust::cuda::par.on(stream).

e.g.

#include <cuda_runtime.h>

#include <thrust/device_vector.h>
#include <thrust/execution_policy.h>  // thrust::cuda::par
#include <thrust/sort.h>
#include <thrust/transform.h>

template <typename T>
class cap_by
{
    const T cap;

  public:
    cap_by(const T cap) : cap(cap) {}

    __host__ __device__ T operator()(T x) const
    {
        if (x > cap) { return cap; }
        return x;
    }
};

void f(cudaStream_t stream)
{
    int n = 1 << 10;
    using T = int;
    thrust::device_vector<T> xs(n);
    thrust::device_vector<T> ys(n);

    thrust::sort(thrust::cuda::par.on(stream), xs.begin(), xs.end());
    int k = 5;
    thrust::transform(thrust::cuda::par.on(stream), xs.begin(), xs.end(),
                      ys.begin(), cap_by<T>(k));
}

int main()
{
    cudaStream_t stream;
    cudaStreamCreate(&stream);
    f(stream);
    cudaStreamDestroy(stream);
    return 0;
}

Failed to build torch-quiver

Hello, I tried both pip install and install from source, but the build fails. Error log:

2022-01-16T23:31:05,187 Using pip 20.3.3 from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/pip (python 3.8)
2022-01-16T23:31:05,188 Non-user install because site-packages writeable
2022-01-16T23:31:05,321 Ignoring indexes: https://pypi.org/simple
2022-01-16T23:31:05,321 Created temporary directory: /tmp/pip-ephem-wheel-cache-ssko_7ys
2022-01-16T23:31:05,322 Created temporary directory: /tmp/pip-req-tracker-jt_5iad6
2022-01-16T23:31:05,322 Initialized build tracking at /tmp/pip-req-tracker-jt_5iad6
2022-01-16T23:31:05,322 Created build tracker: /tmp/pip-req-tracker-jt_5iad6
2022-01-16T23:31:05,322 Entered build tracker: /tmp/pip-req-tracker-jt_5iad6
2022-01-16T23:31:05,322 Created temporary directory: /tmp/pip-install-evkuymo2
2022-01-16T23:31:05,329 Processing /home/user/torch-quiver
2022-01-16T23:31:05,329 Created temporary directory: /tmp/pip-req-build-wc0b161h
2022-01-16T23:31:05,344 Added file:///home/user/torch-quiver to build tracker '/tmp/pip-req-tracker-jt_5iad6'
2022-01-16T23:31:05,344 Running setup.py (path:/tmp/pip-req-build-wc0b161h/setup.py) egg_info for package from file:///home/user/torch-quiver
2022-01-16T23:31:05,344 Created temporary directory: /tmp/pip-pip-egg-info-qjj68kst
2022-01-16T23:31:05,344 Running command python setup.py egg_info
2022-01-16T23:31:06,279 running egg_info
2022-01-16T23:31:06,279 creating /tmp/pip-pip-egg-info-qjj68kst/torch_quiver.egg-info
2022-01-16T23:31:06,279 writing /tmp/pip-pip-egg-info-qjj68kst/torch_quiver.egg-info/PKG-INFO
2022-01-16T23:31:06,280 writing dependency_links to /tmp/pip-pip-egg-info-qjj68kst/torch_quiver.egg-info/dependency_links.txt
2022-01-16T23:31:06,280 writing top-level names to /tmp/pip-pip-egg-info-qjj68kst/torch_quiver.egg-info/top_level.txt
2022-01-16T23:31:06,280 writing manifest file '/tmp/pip-pip-egg-info-qjj68kst/torch_quiver.egg-info/SOURCES.txt'
2022-01-16T23:31:06,298 reading manifest file '/tmp/pip-pip-egg-info-qjj68kst/torch_quiver.egg-info/SOURCES.txt'
2022-01-16T23:31:06,299 reading manifest template 'MANIFEST.in'
2022-01-16T23:31:06,299 /home/user/miniconda3/envs/env/lib/python3.8/distutils/extension.py:131: UserWarning: Unknown Extension options: 'with_cuda'
2022-01-16T23:31:06,299 warnings.warn(msg)
2022-01-16T23:31:06,299 warning: no files found matching 'README'
2022-01-16T23:31:06,302 writing manifest file '/tmp/pip-pip-egg-info-qjj68kst/torch_quiver.egg-info/SOURCES.txt'
2022-01-16T23:31:06,452 Source in /tmp/pip-req-build-wc0b161h has version 0.1.0, which satisfies requirement torch-quiver==0.1.0 from file:///home/user/torch-quiver
2022-01-16T23:31:06,452 Removed torch-quiver==0.1.0 from file:///home/user/torch-quiver from build tracker '/tmp/pip-req-tracker-jt_5iad6'
2022-01-16T23:31:06,455 Created temporary directory: /tmp/pip-unpack-5yxxj95m
2022-01-16T23:31:06,456 Building wheels for collected packages: torch-quiver
2022-01-16T23:31:06,457 Created temporary directory: /tmp/pip-wheel-bmu8ll6w
2022-01-16T23:31:06,457 Destination directory: /tmp/pip-wheel-bmu8ll6w
2022-01-16T23:31:06,458 Running command /home/user/miniconda3/envs/env/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-wc0b161h/setup.py'"'"'; file='"'"'/tmp/pip-req-build-wc0b161h/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-bmu8ll6w
2022-01-16T23:31:07,308 /home/user/miniconda3/envs/env/lib/python3.8/distutils/extension.py:131: UserWarning: Unknown Extension options: 'with_cuda'
2022-01-16T23:31:07,309 warnings.warn(msg)
2022-01-16T23:31:07,331 running bdist_wheel
2022-01-16T23:31:07,340 running build
2022-01-16T23:31:07,341 running build_py
2022-01-16T23:31:07,356 creating build
2022-01-16T23:31:07,356 creating build/lib.linux-x86_64-3.8
2022-01-16T23:31:07,357 creating build/lib.linux-x86_64-3.8/quiver
2022-01-16T23:31:07,357 copying ./srcs/python/quiver/comm.py -> build/lib.linux-x86_64-3.8/quiver
2022-01-16T23:31:07,357 copying ./srcs/python/quiver/__init__.py -> build/lib.linux-x86_64-3.8/quiver
2022-01-16T23:31:07,357 copying ./srcs/python/quiver/shard_tensor.py -> build/lib.linux-x86_64-3.8/quiver
2022-01-16T23:31:07,357 copying ./srcs/python/quiver/partition.py -> build/lib.linux-x86_64-3.8/quiver
2022-01-16T23:31:07,357 copying ./srcs/python/quiver/feature.py -> build/lib.linux-x86_64-3.8/quiver
2022-01-16T23:31:07,357 copying ./srcs/python/quiver/utils.py -> build/lib.linux-x86_64-3.8/quiver
2022-01-16T23:31:07,357 copying ./srcs/python/quiver/async_cuda_sampler.py -> build/lib.linux-x86_64-3.8/quiver
2022-01-16T23:31:07,358 creating build/lib.linux-x86_64-3.8/quiver/multiprocessing
2022-01-16T23:31:07,358 copying ./srcs/python/quiver/multiprocessing/__init__.py -> build/lib.linux-x86_64-3.8/quiver/multiprocessing
2022-01-16T23:31:07,358 copying ./srcs/python/quiver/multiprocessing/reductions.py -> build/lib.linux-x86_64-3.8/quiver/multiprocessing
2022-01-16T23:31:07,358 creating build/lib.linux-x86_64-3.8/quiver/pyg
2022-01-16T23:31:07,358 copying ./srcs/python/quiver/pyg/__init__.py -> build/lib.linux-x86_64-3.8/quiver/pyg
2022-01-16T23:31:07,358 copying ./srcs/python/quiver/pyg/sage_sampler.py -> build/lib.linux-x86_64-3.8/quiver/pyg
2022-01-16T23:31:07,358 running build_ext
2022-01-16T23:31:07,373 building 'torch_quiver' extension
2022-01-16T23:31:07,373 creating /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8
2022-01-16T23:31:07,374 creating /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs
2022-01-16T23:31:07,374 creating /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp
2022-01-16T23:31:07,374 creating /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src
2022-01-16T23:31:07,374 creating /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver
2022-01-16T23:31:07,374 creating /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cpu
2022-01-16T23:31:07,374 creating /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/torch
2022-01-16T23:31:07,374 creating /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda
2022-01-16T23:31:07,404 Emitting ninja build file /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/build.ninja...
2022-01-16T23:31:07,404 Compiling objects...
2022-01-16T23:31:07,404 Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
2022-01-16T23:31:07,417 [1/7] /usr/local/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_comm.o.d -I/tmp/pip-req-build-wc0b161h/./srcs/cpp/include -I/usr/local/cuda/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/TH -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/THC -I/home/user/miniconda3/envs/env/include/python3.8 -c -c /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/cuda/quiver_comm.cu -o /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_comm.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 --expt-extended-lambda -lnuma -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="gcc"' '-DPYBIND11_STDLIB="libstdcpp"' '-DPYBIND11_BUILD_ABI="cxxabi1011"' -DTORCH_EXTENSION_NAME=torch_quiver -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61 -std=c++14
2022-01-16T23:31:07,417 FAILED: /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_comm.o
2022-01-16T23:31:07,417 /usr/local/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_comm.o.d -I/tmp/pip-req-build-wc0b161h/./srcs/cpp/include -I/usr/local/cuda/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/TH -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/THC -I/home/user/miniconda3/envs/env/include/python3.8 -c -c /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/cuda/quiver_comm.cu -o /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_comm.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 --expt-extended-lambda -lnuma -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torch_quiver -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61 -std=c++14
2022-01-16T23:31:07,418 nvcc fatal : Unknown option '-generate-dependencies-with-compile'
2022-01-16T23:31:07,418 [2/7] /usr/local/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_sample.o.d -I/tmp/pip-req-build-wc0b161h/./srcs/cpp/include -I/usr/local/cuda/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/TH -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/THC -I/home/user/miniconda3/envs/env/include/python3.8 -c -c /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/cuda/quiver_sample.cu -o /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_sample.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 --expt-extended-lambda -lnuma -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torch_quiver -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61 -std=c++14
2022-01-16T23:31:07,418 FAILED: /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_sample.o
2022-01-16T23:31:07,418 /usr/local/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_sample.o.d -I/tmp/pip-req-build-wc0b161h/./srcs/cpp/include -I/usr/local/cuda/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/TH -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/THC -I/home/user/miniconda3/envs/env/include/python3.8 -c -c /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/cuda/quiver_sample.cu -o /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_sample.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 --expt-extended-lambda -lnuma -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torch_quiver -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61 -std=c++14
2022-01-16T23:31:07,418 nvcc fatal : Unknown option '-generate-dependencies-with-compile'
2022-01-16T23:31:07,418 [3/7] /usr/local/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_feature.o.d -I/tmp/pip-req-build-wc0b161h/./srcs/cpp/include -I/usr/local/cuda/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/TH -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/THC -I/home/user/miniconda3/envs/env/include/python3.8 -c -c /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/cuda/quiver_feature.cu -o /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_feature.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 --expt-extended-lambda -lnuma -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torch_quiver -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61 -std=c++14
2022-01-16T23:31:07,418 FAILED: /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_feature.o
2022-01-16T23:31:07,418 /usr/local/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_feature.o.d -I/tmp/pip-req-build-wc0b161h/./srcs/cpp/include -I/usr/local/cuda/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/TH -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/THC -I/home/user/miniconda3/envs/env/include/python3.8 -c -c /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/cuda/quiver_feature.cu -o /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_feature.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 --expt-extended-lambda -lnuma -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torch_quiver -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61 -std=c++14
2022-01-16T23:31:07,418 nvcc fatal : Unknown option '-generate-dependencies-with-compile'
2022-01-16T23:31:07,591 [4/7] c++ -MMD -MF /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/trace.o.d -pthread -B /home/user/miniconda3/envs/env/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/tmp/pip-req-build-wc0b161h/./srcs/cpp/include -I/usr/local/cuda/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/TH -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/THC -I/home/user/miniconda3/envs/env/include/python3.8 -c -c /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/trace.cpp -o /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/trace.o -std=c++17 -DHAVE_CUDA -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torch_quiver -D_GLIBCXX_USE_CXX11_ABI=0
2022-01-16T23:31:07,591 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
2022-01-16T23:31:16,757 [5/7] c++ -MMD -MF /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/quiver.o.d -pthread -B /home/user/miniconda3/envs/env/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/tmp/pip-req-build-wc0b161h/./srcs/cpp/include -I/usr/local/cuda/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/TH -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/THC -I/home/user/miniconda3/envs/env/include/python3.8 -c -c /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/quiver.cpp -o /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/quiver.o -std=c++17 -DHAVE_CUDA -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torch_quiver -D_GLIBCXX_USE_CXX11_ABI=0
2022-01-16T23:31:16,757 FAILED: /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/quiver.o
2022-01-16T23:31:16,757 c++ -MMD -MF /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/quiver.o.d -pthread -B /home/user/miniconda3/envs/env/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/tmp/pip-req-build-wc0b161h/./srcs/cpp/include -I/usr/local/cuda/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/TH -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/THC -I/home/user/miniconda3/envs/env/include/python3.8 -c -c /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/quiver.cpp -o /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/quiver.o -std=c++17 -DHAVE_CUDA -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torch_quiver -D_GLIBCXX_USE_CXX11_ABI=0
2022-01-16T23:31:16,757 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
2022-01-16T23:31:16,757 In file included from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/ATen/Parallel.h:140:0,
2022-01-16T23:31:16,757 from /tmp/pip-req-build-wc0b161h/./srcs/cpp/include/quiver/quiver.cpu.hpp:8,
2022-01-16T23:31:16,757 from /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/quiver.cpp:3:
2022-01-16T23:31:16,757 /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/ATen/ParallelOpenMP.h:83:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
2022-01-16T23:31:16,758 #pragma omp parallel for if ((end - begin) >= grain_size)
2022-01-16T23:31:16,758 ^
2022-01-16T23:31:16,758 In file included from /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/quiver.cpp:3:0:
2022-01-16T23:31:16,758 /tmp/pip-req-build-wc0b161h/./srcs/cpp/include/quiver/quiver.cpu.hpp: In function ‘N quiver::safe_sample(const T*, const T*, N, T*)’:
2022-01-16T23:31:16,758 /tmp/pip-req-build-wc0b161h/./srcs/cpp/include/quiver/quiver.cpu.hpp:21:9: error: ‘sample’ is not a member of ‘std’
2022-01-16T23:31:16,758 std::sample(begin, end, outputs, k, g);
2022-01-16T23:31:16,758 ^
2022-01-16T23:31:16,758 /tmp/pip-req-build-wc0b161h/./srcs/cpp/include/quiver/quiver.cpu.hpp: In constructor ‘quiver::quiver<T, (quiver::device_t)0u>::quiver(T, std::vector<std::pair<_FIter, FIter> >)’:
2022-01-16T23:31:16,758 /tmp/pip-req-build-wc0b161h/./srcs/cpp/include/quiver/quiver.cpu.hpp:41:20: error: expected unqualified-id before ‘[’ token
2022-01-16T23:31:16,758 const auto [row_idx, col_idx] = unzip(edge_index);
2022-01-16T23:31:16,758 ^
2022-01-16T23:31:16,758 /tmp/pip-req-build-wc0b161h/./srcs/cpp/include/quiver/quiver.cpu.hpp:42:54: error: ‘row_idx’ was not declared in this scope
2022-01-16T23:31:16,758 std::vector row_ptr = compress_row_idx(n, row_idx);
2022-01-16T23:31:16,758 ^
2022-01-16T23:31:16,758 /tmp/pip-req-build-wc0b161h/./srcs/cpp/include/quiver/quiver.cpu.hpp:44:19: error: ‘col_idx’ was not declared in this scope
2022-01-16T23:31:16,759 std::copy(col_idx.begin(), col_idx.end(), col_idx_.begin());
2022-01-16T23:31:16,759 ^
2022-01-16T23:31:18,435 [6/7] c++ -MMD -MF /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cpu/tensor.o.d -pthread -B /home/user/miniconda3/envs/env/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/tmp/pip-req-build-wc0b161h/./srcs/cpp/include -I/usr/local/cuda/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/TH -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/THC -I/home/user/miniconda3/envs/env/include/python3.8 -c -c /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/cpu/tensor.cpp -o /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cpu/tensor.o -std=c++17 -DHAVE_CUDA -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torch_quiver -D_GLIBCXX_USE_CXX11_ABI=0
2022-01-16T23:31:18,435 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
2022-01-16T23:31:18,436 In file included from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/ATen/Parallel.h:140:0,
2022-01-16T23:31:18,436 from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
2022-01-16T23:31:18,436 from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
2022-01-16T23:31:18,436 from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
2022-01-16T23:31:18,436 from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:13,
2022-01-16T23:31:18,436 from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/extension.h:4,
2022-01-16T23:31:18,436 from /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/cpu/tensor.cpp:10:
2022-01-16T23:31:18,436 /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/ATen/ParallelOpenMP.h:83:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
2022-01-16T23:31:18,436 #pragma omp parallel for if ((end - begin) >= grain_size)
2022-01-16T23:31:18,436 ^

2022-01-16T23:31:22,532 [7/7] c++ -MMD -MF /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/torch/module.o.d -pthread -B /home/user/miniconda3/envs/env/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/tmp/pip-req-build-wc0b161h/./srcs/cpp/include -I/usr/local/cuda/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/TH -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/THC -I/home/user/miniconda3/envs/env/include/python3.8 -c -c /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/torch/module.cpp -o /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/torch/module.o -std=c++17 -DHAVE_CUDA -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torch_quiver -D_GLIBCXX_USE_CXX11_ABI=0
2022-01-16T23:31:22,532 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
2022-01-16T23:31:22,532 In file included from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/ATen/Parallel.h:140:0,
2022-01-16T23:31:22,532 from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
2022-01-16T23:31:22,532 from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
2022-01-16T23:31:22,532 from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
2022-01-16T23:31:22,532 from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:13,
2022-01-16T23:31:22,532 from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/extension.h:4,
2022-01-16T23:31:22,533 from /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/torch/module.cpp:1:
2022-01-16T23:31:22,533 /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/ATen/ParallelOpenMP.h:83:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
2022-01-16T23:31:22,533 #pragma omp parallel for if ((end - begin) >= grain_size)
2022-01-16T23:31:22,533 ^
2022-01-16T23:31:22,533 ninja: build stopped: subcommand failed.
2022-01-16T23:31:22,533 Traceback (most recent call last):
2022-01-16T23:31:22,533 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1667, in _run_ninja_build
2022-01-16T23:31:22,534 subprocess.run(
2022-01-16T23:31:22,534 File "/home/user/miniconda3/envs/env/lib/python3.8/subprocess.py", line 512, in run
2022-01-16T23:31:22,534 raise CalledProcessError(retcode, process.args,
2022-01-16T23:31:22,534 subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

2022-01-16T23:31:22,534 The above exception was the direct cause of the following exception:

2022-01-16T23:31:22,535 Traceback (most recent call last):
2022-01-16T23:31:22,535 File "", line 1, in
2022-01-16T23:31:22,535 File "/tmp/pip-req-build-wc0b161h/setup.py", line 64, in
2022-01-16T23:31:22,535 setup(
2022-01-16T23:31:22,535 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/setuptools/init.py", line 153, in setup
2022-01-16T23:31:22,535 return distutils.core.setup(attrs)
2022-01-16T23:31:22,535 File "/home/user/miniconda3/envs/env/lib/python3.8/distutils/core.py", line 148, in setup
2022-01-16T23:31:22,535 dist.run_commands()
2022-01-16T23:31:22,535 File "/home/user/miniconda3/envs/env/lib/python3.8/distutils/dist.py", line 966, in run_commands
2022-01-16T23:31:22,535 self.run_command(cmd)
2022-01-16T23:31:22,535 File "/home/user/miniconda3/envs/env/lib/python3.8/distutils/dist.py", line 985, in run_command
2022-01-16T23:31:22,536 cmd_obj.run()
2022-01-16T23:31:22,536 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/wheel/bdist_wheel.py", line 299, in run
2022-01-16T23:31:22,536 self.run_command('build')
2022-01-16T23:31:22,536 File "/home/user/miniconda3/envs/env/lib/python3.8/distutils/cmd.py", line 313, in run_command
2022-01-16T23:31:22,536 self.distribution.run_command(command)
2022-01-16T23:31:22,536 File "/home/user/miniconda3/envs/env/lib/python3.8/distutils/dist.py", line 985, in run_command
2022-01-16T23:31:22,537 cmd_obj.run()
2022-01-16T23:31:22,537 File "/home/user/miniconda3/envs/env/lib/python3.8/distutils/command/build.py", line 135, in run
2022-01-16T23:31:22,537 self.run_command(cmd_name)
2022-01-16T23:31:22,537 File "/home/user/miniconda3/envs/env/lib/python3.8/distutils/cmd.py", line 313, in run_command
2022-01-16T23:31:22,537 self.distribution.run_command(command)
2022-01-16T23:31:22,537 File "/home/user/miniconda3/envs/env/lib/python3.8/distutils/dist.py", line 985, in run_command
2022-01-16T23:31:22,537 cmd_obj.run()
2022-01-16T23:31:22,537 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 79, in run
2022-01-16T23:31:22,537 build_ext.run(self)
2022-01-16T23:31:22,537 File "/home/user/miniconda3/envs/env/lib/python3.8/distutils/command/build_ext.py", line 340, in run
2022-01-16T23:31:22,538 self.build_extensions()
2022-01-16T23:31:22,538 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 708, in build_extensions
2022-01-16T23:31:22,538 build_ext.build_extensions(self)
2022-01-16T23:31:22,538 File "/home/user/miniconda3/envs/env/lib/python3.8/distutils/command/build_ext.py", line 449, in build_extensions
2022-01-16T23:31:22,538 self.build_extensions_serial()
2022-01-16T23:31:22,538 File "/home/user/miniconda3/envs/env/lib/python3.8/distutils/command/build_ext.py", line 474, in build_extensions_serial
2022-01-16T23:31:22,538 self.build_extension(ext)
2022-01-16T23:31:22,539 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 196, in build_extension
2022-01-16T23:31:22,539 build_ext.build_extension(self, ext)
2022-01-16T23:31:22,539 File "/home/user/miniconda3/envs/env/lib/python3.8/distutils/command/build_ext.py", line 528, in build_extension
2022-01-16T23:31:22,539 objects = self.compiler.compile(sources,
2022-01-16T23:31:22,539 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 529, in unix_wrap_ninja_compile
2022-01-16T23:31:22,539 write_ninja_file_and_compile_objects(
2022-01-16T23:31:22,539 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1354, in write_ninja_file_and_compile_objects
2022-01-16T23:31:22,540 _run_ninja_build(
2022-01-16T23:31:22,540 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1683, in _run_ninja_build
2022-01-16T23:31:22,540 raise RuntimeError(message) from e
2022-01-16T23:31:22,541 RuntimeError: Error compiling objects for extension
2022-01-16T23:31:22,718 ERROR: Failed building wheel for torch-quiver
2022-01-16T23:31:22,719 Running setup.py clean for torch-quiver
2022-01-16T23:31:22,719 Running command /home/user/miniconda3/envs/env/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-wc0b161h/setup.py'"'"'; file='"'"'/tmp/pip-req-build-wc0b161h/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' clean --all
2022-01-16T23:31:23,610 /home/user/miniconda3/envs/env/lib/python3.8/distutils/extension.py:131: UserWarning: Unknown Extension options: 'with_cuda'
2022-01-16T23:31:23,611 warnings.warn(msg)
2022-01-16T23:31:23,628 running clean
2022-01-16T23:31:23,628 removing 'build/temp.linux-x86_64-3.8' (and everything under it)
2022-01-16T23:31:23,631 removing 'build/lib.linux-x86_64-3.8' (and everything under it)
2022-01-16T23:31:23,631 'build/bdist.linux-x86_64' does not exist -- can't clean it
2022-01-16T23:31:23,631 'build/scripts-3.8' does not exist -- can't clean it
2022-01-16T23:31:23,631 removing 'build'
2022-01-16T23:31:23,785 Failed to build torch-quiver
2022-01-16T23:31:24,153 Installing collected packages: torch-quiver
2022-01-16T23:31:24,154 Created temporary directory: /tmp/pip-record-psoiog9y
2022-01-16T23:31:24,154 Running command /home/user/miniconda3/envs/env/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-wc0b161h/setup.py'"'"'; file='"'"'/tmp/pip-req-build-wc0b161h/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record /tmp/pip-record-psoiog9y/install-record.txt --single-version-externally-managed --compile --install-headers /home/user/miniconda3/envs/env/include/python3.8/torch-quiver
2022-01-16T23:31:25,061 /home/user/miniconda3/envs/env/lib/python3.8/distutils/extension.py:131: UserWarning: Unknown Extension options: 'with_cuda'
2022-01-16T23:31:25,062 warnings.warn(msg)
2022-01-16T23:31:25,079 running install
2022-01-16T23:31:25,079 running build
2022-01-16T23:31:25,079 running build_py
2022-01-16T23:31:25,091 creating build
2022-01-16T23:31:25,091 creating build/lib.linux-x86_64-3.8
2022-01-16T23:31:25,092 creating build/lib.linux-x86_64-3.8/quiver
2022-01-16T23:31:25,092 copying ./srcs/python/quiver/comm.py -> build/lib.linux-x86_64-3.8/quiver
2022-01-16T23:31:25,092 copying ./srcs/python/quiver/__init__.py -> build/lib.linux-x86_64-3.8/quiver
2022-01-16T23:31:25,092 copying ./srcs/python/quiver/shard_tensor.py -> build/lib.linux-x86_64-3.8/quiver
2022-01-16T23:31:25,092 copying ./srcs/python/quiver/partition.py -> build/lib.linux-x86_64-3.8/quiver
2022-01-16T23:31:25,092 copying ./srcs/python/quiver/feature.py -> build/lib.linux-x86_64-3.8/quiver
2022-01-16T23:31:25,092 copying ./srcs/python/quiver/utils.py -> build/lib.linux-x86_64-3.8/quiver
2022-01-16T23:31:25,092 copying ./srcs/python/quiver/async_cuda_sampler.py -> build/lib.linux-x86_64-3.8/quiver
2022-01-16T23:31:25,092 creating build/lib.linux-x86_64-3.8/quiver/multiprocessing
2022-01-16T23:31:25,093 copying ./srcs/python/quiver/multiprocessing/__init__.py -> build/lib.linux-x86_64-3.8/quiver/multiprocessing
2022-01-16T23:31:25,093 copying ./srcs/python/quiver/multiprocessing/reductions.py -> build/lib.linux-x86_64-3.8/quiver/multiprocessing
2022-01-16T23:31:25,093 creating build/lib.linux-x86_64-3.8/quiver/pyg
2022-01-16T23:31:25,093 copying ./srcs/python/quiver/pyg/__init__.py -> build/lib.linux-x86_64-3.8/quiver/pyg
2022-01-16T23:31:25,093 copying ./srcs/python/quiver/pyg/sage_sampler.py -> build/lib.linux-x86_64-3.8/quiver/pyg
2022-01-16T23:31:25,093 running build_ext
2022-01-16T23:31:25,118 building 'torch_quiver' extension
2022-01-16T23:31:25,118 creating /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8
2022-01-16T23:31:25,118 creating /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs
2022-01-16T23:31:25,118 creating /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp
2022-01-16T23:31:25,118 creating /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src
2022-01-16T23:31:25,118 creating /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver
2022-01-16T23:31:25,119 creating /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cpu
2022-01-16T23:31:25,119 creating /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/torch
2022-01-16T23:31:25,119 creating /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda
2022-01-16T23:31:25,149 Emitting ninja build file /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/build.ninja...
2022-01-16T23:31:25,150 Compiling objects...
2022-01-16T23:31:25,150 Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
2022-01-16T23:31:25,162 [1/7] /usr/local/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_sample.o.d -I/tmp/pip-req-build-wc0b161h/./srcs/cpp/include -I/usr/local/cuda/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/TH -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/THC -I/home/user/miniconda3/envs/env/include/python3.8 -c -c /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/cuda/quiver_sample.cu -o /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_sample.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 --expt-extended-lambda -lnuma -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torch_quiver -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61 -std=c++14
2022-01-16T23:31:25,162 FAILED: /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_sample.o
2022-01-16T23:31:25,163 /usr/local/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_sample.o.d -I/tmp/pip-req-build-wc0b161h/./srcs/cpp/include -I/usr/local/cuda/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/TH -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/THC -I/home/user/miniconda3/envs/env/include/python3.8 -c -c /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/cuda/quiver_sample.cu -o /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_sample.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 --expt-extended-lambda -lnuma -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torch_quiver -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61 -std=c++14
2022-01-16T23:31:25,163 nvcc fatal : Unknown option '-generate-dependencies-with-compile'
2022-01-16T23:31:25,163 [2/7] /usr/local/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_comm.o.d -I/tmp/pip-req-build-wc0b161h/./srcs/cpp/include -I/usr/local/cuda/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/TH -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/THC -I/home/user/miniconda3/envs/env/include/python3.8 -c -c /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/cuda/quiver_comm.cu -o /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_comm.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 --expt-extended-lambda -lnuma -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torch_quiver -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61 -std=c++14
2022-01-16T23:31:25,163 FAILED: /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_comm.o
2022-01-16T23:31:25,163 /usr/local/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_comm.o.d -I/tmp/pip-req-build-wc0b161h/./srcs/cpp/include -I/usr/local/cuda/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/TH -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/THC -I/home/user/miniconda3/envs/env/include/python3.8 -c -c /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/cuda/quiver_comm.cu -o /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_comm.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 --expt-extended-lambda -lnuma -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torch_quiver -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61 -std=c++14
2022-01-16T23:31:25,163 nvcc fatal : Unknown option '-generate-dependencies-with-compile'
2022-01-16T23:31:25,163 [3/7] /usr/local/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_feature.o.d -I/tmp/pip-req-build-wc0b161h/./srcs/cpp/include -I/usr/local/cuda/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/TH -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/THC -I/home/user/miniconda3/envs/env/include/python3.8 -c -c /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/cuda/quiver_feature.cu -o /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_feature.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 --expt-extended-lambda -lnuma -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torch_quiver -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61 -std=c++14
2022-01-16T23:31:25,163 FAILED: /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_feature.o
2022-01-16T23:31:25,164 /usr/local/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_feature.o.d -I/tmp/pip-req-build-wc0b161h/./srcs/cpp/include -I/usr/local/cuda/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/TH -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/THC -I/home/user/miniconda3/envs/env/include/python3.8 -c -c /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/cuda/quiver_feature.cu -o /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cuda/quiver_feature.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 --expt-extended-lambda -lnuma -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torch_quiver -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61 -std=c++14
2022-01-16T23:31:25,164 nvcc fatal : Unknown option '-generate-dependencies-with-compile'
2022-01-16T23:31:25,344 [4/7] c++ -MMD -MF /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/trace.o.d -pthread -B /home/user/miniconda3/envs/env/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/tmp/pip-req-build-wc0b161h/./srcs/cpp/include -I/usr/local/cuda/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/TH -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/THC -I/home/user/miniconda3/envs/env/include/python3.8 -c -c /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/trace.cpp -o /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/trace.o -std=c++17 -DHAVE_CUDA -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torch_quiver -D_GLIBCXX_USE_CXX11_ABI=0
2022-01-16T23:31:25,345 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
2022-01-16T23:31:34,346 [5/7] c++ -MMD -MF /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/quiver.o.d -pthread -B /home/user/miniconda3/envs/env/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/tmp/pip-req-build-wc0b161h/./srcs/cpp/include -I/usr/local/cuda/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/TH -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/THC -I/home/user/miniconda3/envs/env/include/python3.8 -c -c /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/quiver.cpp -o /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/quiver.o -std=c++17 -DHAVE_CUDA -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torch_quiver -D_GLIBCXX_USE_CXX11_ABI=0
2022-01-16T23:31:34,346 FAILED: /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/quiver.o
2022-01-16T23:31:34,346 c++ -MMD -MF /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/quiver.o.d -pthread -B /home/user/miniconda3/envs/env/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/tmp/pip-req-build-wc0b161h/./srcs/cpp/include -I/usr/local/cuda/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/TH -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/THC -I/home/user/miniconda3/envs/env/include/python3.8 -c -c /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/quiver.cpp -o /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/quiver.o -std=c++17 -DHAVE_CUDA -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torch_quiver -D_GLIBCXX_USE_CXX11_ABI=0
2022-01-16T23:31:34,347 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
2022-01-16T23:31:34,347 In file included from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/ATen/Parallel.h:140:0,
2022-01-16T23:31:34,347 from /tmp/pip-req-build-wc0b161h/./srcs/cpp/include/quiver/quiver.cpu.hpp:8,
2022-01-16T23:31:34,347 from /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/quiver.cpp:3:
2022-01-16T23:31:34,347 /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/ATen/ParallelOpenMP.h:83:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
2022-01-16T23:31:34,347 #pragma omp parallel for if ((end - begin) >= grain_size)
2022-01-16T23:31:34,347 ^
2022-01-16T23:31:34,347 In file included from /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/quiver.cpp:3:0:
2022-01-16T23:31:34,348 /tmp/pip-req-build-wc0b161h/./srcs/cpp/include/quiver/quiver.cpu.hpp: In function ‘N quiver::safe_sample(const T*, const T*, N, T*)’:
2022-01-16T23:31:34,348 /tmp/pip-req-build-wc0b161h/./srcs/cpp/include/quiver/quiver.cpu.hpp:21:9: error: ‘sample’ is not a member of ‘std’
2022-01-16T23:31:34,348 std::sample(begin, end, outputs, k, g);
2022-01-16T23:31:34,348 ^
2022-01-16T23:31:34,348 /tmp/pip-req-build-wc0b161h/./srcs/cpp/include/quiver/quiver.cpu.hpp: In constructor ‘quiver::quiver<T, (quiver::device_t)0u>::quiver(T, std::vector<std::pair<_FIter, FIter> >)’:
2022-01-16T23:31:34,348 /tmp/pip-req-build-wc0b161h/./srcs/cpp/include/quiver/quiver.cpu.hpp:41:20: error: expected unqualified-id before ‘[’ token
2022-01-16T23:31:34,348 const auto [row_idx, col_idx] = unzip(edge_index);
2022-01-16T23:31:34,348 ^
2022-01-16T23:31:34,348 /tmp/pip-req-build-wc0b161h/./srcs/cpp/include/quiver/quiver.cpu.hpp:42:54: error: ‘row_idx’ was not declared in this scope
2022-01-16T23:31:34,349 std::vector row_ptr = compress_row_idx(n, row_idx);
2022-01-16T23:31:34,349 ^
2022-01-16T23:31:34,349 /tmp/pip-req-build-wc0b161h/./srcs/cpp/include/quiver/quiver.cpu.hpp:44:19: error: ‘col_idx’ was not declared in this scope
2022-01-16T23:31:34,349 std::copy(col_idx.begin(), col_idx.end(), col_idx_.begin());
2022-01-16T23:31:34,349 ^
2022-01-16T23:31:35,688 [6/7] c++ -MMD -MF /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cpu/tensor.o.d -pthread -B /home/user/miniconda3/envs/env/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/tmp/pip-req-build-wc0b161h/./srcs/cpp/include -I/usr/local/cuda/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/TH -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/THC -I/home/user/miniconda3/envs/env/include/python3.8 -c -c /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/cpu/tensor.cpp -o /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/cpu/tensor.o -std=c++17 -DHAVE_CUDA -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torch_quiver -D_GLIBCXX_USE_CXX11_ABI=0
2022-01-16T23:31:35,688 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
2022-01-16T23:31:35,688 In file included from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/ATen/Parallel.h:140:0,
2022-01-16T23:31:35,688 from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
2022-01-16T23:31:35,689 from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
2022-01-16T23:31:35,689 from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
2022-01-16T23:31:35,689 from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:13,
2022-01-16T23:31:35,689 from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/extension.h:4,
2022-01-16T23:31:35,689 from /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/cpu/tensor.cpp:10:
2022-01-16T23:31:35,689 /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/ATen/ParallelOpenMP.h:83:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
2022-01-16T23:31:35,689 #pragma omp parallel for if ((end - begin) >= grain_size)
2022-01-16T23:31:35,689 ^
2022-01-16T23:31:39,502 [7/7] c++ -MMD -MF /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/torch/module.o.d -pthread -B /home/user/miniconda3/envs/env/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/tmp/pip-req-build-wc0b161h/./srcs/cpp/include -I/usr/local/cuda/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/TH -I/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/THC -I/home/user/miniconda3/envs/env/include/python3.8 -c -c /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/torch/module.cpp -o /tmp/pip-req-build-wc0b161h/build/temp.linux-x86_64-3.8/srcs/cpp/src/quiver/torch/module.o -std=c++17 -DHAVE_CUDA -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torch_quiver -D_GLIBCXX_USE_CXX11_ABI=0
2022-01-16T23:31:39,503 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
2022-01-16T23:31:39,503 In file included from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/ATen/Parallel.h:140:0,
2022-01-16T23:31:39,503 from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
2022-01-16T23:31:39,503 from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
2022-01-16T23:31:39,503 from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
2022-01-16T23:31:39,504 from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:13,
2022-01-16T23:31:39,504 from /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/torch/extension.h:4,
2022-01-16T23:31:39,504 from /tmp/pip-req-build-wc0b161h/srcs/cpp/src/quiver/torch/module.cpp:1:
2022-01-16T23:31:39,504 /home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/include/ATen/ParallelOpenMP.h:83:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
2022-01-16T23:31:39,504 #pragma omp parallel for if ((end - begin) >= grain_size)
2022-01-16T23:31:39,504 ^
2022-01-16T23:31:39,504 ninja: build stopped: subcommand failed.
2022-01-16T23:31:39,504 Traceback (most recent call last):
2022-01-16T23:31:39,504 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1667, in _run_ninja_build
2022-01-16T23:31:39,504 subprocess.run(
2022-01-16T23:31:39,505 File "/home/user/miniconda3/envs/env/lib/python3.8/subprocess.py", line 512, in run
2022-01-16T23:31:39,505 raise CalledProcessError(retcode, process.args,
2022-01-16T23:31:39,505 subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

2022-01-16T23:31:39,505 The above exception was the direct cause of the following exception:

2022-01-16T23:31:39,505 Traceback (most recent call last):
2022-01-16T23:31:39,506 File "", line 1, in
2022-01-16T23:31:39,506 File "/tmp/pip-req-build-wc0b161h/setup.py", line 64, in
2022-01-16T23:31:39,506 setup(
2022-01-16T23:31:39,506 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/setuptools/init.py", line 153, in setup
2022-01-16T23:31:39,506 return distutils.core.setup(**attrs)
2022-01-16T23:31:39,506 File "/home/user/miniconda3/envs/env/lib/python3.8/distutils/core.py", line 148, in setup
2022-01-16T23:31:39,506 dist.run_commands()
2022-01-16T23:31:39,506 File "/home/user/miniconda3/envs/env/lib/python3.8/distutils/dist.py", line 966, in run_commands
2022-01-16T23:31:39,506 self.run_command(cmd)
2022-01-16T23:31:39,506 File "/home/user/miniconda3/envs/env/lib/python3.8/distutils/dist.py", line 985, in run_command
2022-01-16T23:31:39,507 cmd_obj.run()
2022-01-16T23:31:39,507 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/setuptools/command/install.py", line 61, in run
2022-01-16T23:31:39,507 return orig.install.run(self)
2022-01-16T23:31:39,507 File "/home/user/miniconda3/envs/env/lib/python3.8/distutils/command/install.py", line 545, in run
2022-01-16T23:31:39,507 self.run_command('build')
2022-01-16T23:31:39,507 File "/home/user/miniconda3/envs/env/lib/python3.8/distutils/cmd.py", line 313, in run_command
2022-01-16T23:31:39,507 self.distribution.run_command(command)
2022-01-16T23:31:39,507 File "/home/user/miniconda3/envs/env/lib/python3.8/distutils/dist.py", line 985, in run_command
2022-01-16T23:31:39,507 cmd_obj.run()
2022-01-16T23:31:39,508 File "/home/user/miniconda3/envs/env/lib/python3.8/distutils/command/build.py", line 135, in run
2022-01-16T23:31:39,508 self.run_command(cmd_name)
2022-01-16T23:31:39,508 File "/home/user/miniconda3/envs/env/lib/python3.8/distutils/cmd.py", line 313, in run_command
2022-01-16T23:31:39,508 self.distribution.run_command(command)
2022-01-16T23:31:39,508 File "/home/user/miniconda3/envs/env/lib/python3.8/distutils/dist.py", line 985, in run_command
2022-01-16T23:31:39,508 cmd_obj.run()
2022-01-16T23:31:39,508 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 79, in run
2022-01-16T23:31:39,508 _build_ext.run(self)
2022-01-16T23:31:39,508 File "/home/user/miniconda3/envs/env/lib/python3.8/distutils/command/build_ext.py", line 340, in run
2022-01-16T23:31:39,509 self.build_extensions()
2022-01-16T23:31:39,509 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 708, in build_extensions
2022-01-16T23:31:39,509 build_ext.build_extensions(self)
2022-01-16T23:31:39,509 File "/home/user/miniconda3/envs/env/lib/python3.8/distutils/command/build_ext.py", line 449, in build_extensions
2022-01-16T23:31:39,509 self._build_extensions_serial()
2022-01-16T23:31:39,509 File "/home/user/miniconda3/envs/env/lib/python3.8/distutils/command/build_ext.py", line 474, in _build_extensions_serial
2022-01-16T23:31:39,509 self.build_extension(ext)
2022-01-16T23:31:39,509 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 196, in build_extension
2022-01-16T23:31:39,509 _build_ext.build_extension(self, ext)
2022-01-16T23:31:39,510 File "/home/user/miniconda3/envs/env/lib/python3.8/distutils/command/build_ext.py", line 528, in build_extension
2022-01-16T23:31:39,510 objects = self.compiler.compile(sources,
2022-01-16T23:31:39,510 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 529, in unix_wrap_ninja_compile
2022-01-16T23:31:39,510 _write_ninja_file_and_compile_objects(
2022-01-16T23:31:39,510 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1354, in _write_ninja_file_and_compile_objects
2022-01-16T23:31:39,510 _run_ninja_build(
2022-01-16T23:31:39,511 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1683, in _run_ninja_build
2022-01-16T23:31:39,511 raise RuntimeError(message) from e
2022-01-16T23:31:39,511 RuntimeError: Error compiling objects for extension
2022-01-16T23:31:39,724 ERROR: Command errored out with exit status 1: /home/user/miniconda3/envs/env/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-wc0b161h/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-wc0b161h/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-psoiog9y/install-record.txt --single-version-externally-managed --compile --install-headers /home/user/miniconda3/envs/env/include/python3.8/torch-quiver Check the logs for full command output.
2022-01-16T23:31:39,724 Exception information:
2022-01-16T23:31:39,724 Traceback (most recent call last):
2022-01-16T23:31:39,724 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/pip/_internal/req/req_install.py", line 840, in install
2022-01-16T23:31:39,724 success = install_legacy(
2022-01-16T23:31:39,724 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/pip/_internal/operations/install/legacy.py", line 86, in install
2022-01-16T23:31:39,724 raise LegacyInstallFailure
2022-01-16T23:31:39,724 pip._internal.operations.install.legacy.LegacyInstallFailure
2022-01-16T23:31:39,724
2022-01-16T23:31:39,724 During handling of the above exception, another exception occurred:
2022-01-16T23:31:39,724
2022-01-16T23:31:39,724 Traceback (most recent call last):
2022-01-16T23:31:39,724 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/pip/_internal/cli/base_command.py", line 224, in _main
2022-01-16T23:31:39,724 status = self.run(options, args)
2022-01-16T23:31:39,724 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/pip/_internal/cli/req_command.py", line 180, in wrapper
2022-01-16T23:31:39,724 return func(self, options, args)
2022-01-16T23:31:39,724 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/pip/_internal/commands/install.py", line 394, in run
2022-01-16T23:31:39,724 installed = install_given_reqs(
2022-01-16T23:31:39,724 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/pip/_internal/req/init.py", line 82, in install_given_reqs
2022-01-16T23:31:39,724 requirement.install(
2022-01-16T23:31:39,724 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/pip/_internal/req/req_install.py", line 858, in install
2022-01-16T23:31:39,724 six.reraise(*exc.parent)
2022-01-16T23:31:39,724 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/pip/_vendor/six.py", line 703, in reraise
2022-01-16T23:31:39,724 raise value
2022-01-16T23:31:39,724 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/pip/_internal/operations/install/legacy.py", line 74, in install
2022-01-16T23:31:39,724 runner(
2022-01-16T23:31:39,724 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/pip/_internal/utils/subprocess.py", line 271, in runner
2022-01-16T23:31:39,724 call_subprocess(
2022-01-16T23:31:39,724 File "/home/user/miniconda3/envs/env/lib/python3.8/site-packages/pip/_internal/utils/subprocess.py", line 240, in call_subprocess
2022-01-16T23:31:39,724 raise InstallationError(exc_msg)
2022-01-16T23:31:39,724 pip._internal.exceptions.InstallationError: Command errored out with exit status 1: /home/user/miniconda3/envs/env/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-wc0b161h/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-wc0b161h/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-psoiog9y/install-record.txt --single-version-externally-managed --compile --install-headers /home/user/miniconda3/envs/env/include/python3.8/torch-quiver Check the logs for full command output.
2022-01-16T23:31:39,791 Removed build tracker: '/tmp/pip-req-tracker-jt_5iad6'

AttributeError: 'Feature' object has no attribute 'dim'

While adapting the official PyTorch Geometric example graph_sage_unsup.py to torch-quiver, I ran into "AttributeError: 'Feature' object has no attribute 'dim'", which indicates that the quiver.Feature class does not provide a dim attribute.
Following the other examples in this repository, I replaced x = data.x.to(device) with x = quiver.Feature(rank=0, device_list=[0], device_cache_size="10M", cache_policy="device_replicate", csr_topo=csr_topo) followed by x.from_cpu_tensor(data.x), but then got an error in the following code:

def full_forward(self, x, edge_index):
    for i, conv in enumerate(self.convs):
        x = conv(x, edge_index)     # <---- error occurs here
        if i != self.num_layers - 1:
            x = x.relu()
            x = F.dropout(x, p=0.5, training=self.training)
    return x

The error is

File "/home/user/.local/lib/python3.7/site-packages/torch_scatter/utils.py", line 6, in broadcast
    dim = other.dim() + dim
AttributeError: 'Feature' object has no attribute 'dim'

Shall we add a 'dim' attribute to quiver.Feature?
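A possible workaround is sketched below, under the assumption that quiver.Feature can be indexed with a node-ID tensor as in this repository's training examples; data, csr_topo, train_loader, model and device come from the example and are not defined here. PyG layers eventually hand x to torch_scatter, which expects a torch.Tensor with methods such as .dim(), so one option is to pass plain tensors into the model instead of the Feature object itself.

import quiver

# Hedged sketch, not an official fix: keep the Feature object for mini-batch
# feature collection and hand plain torch.Tensor objects to the model.
x = quiver.Feature(rank=0, device_list=[0], device_cache_size="10M",
                   cache_policy="device_replicate", csr_topo=csr_topo)
x.from_cpu_tensor(data.x)

# Mini-batch path: indexing the Feature with the sampled node IDs is assumed to
# return a regular GPU tensor, which the PyG convolutions can handle.
for batch_size, n_id, adjs in train_loader:
    out = model(x[n_id], adjs)

# Full-graph path (full_forward): keep using the original dense tensor rather
# than the Feature object, since every layer sees the whole feature matrix.
emb = model.full_forward(data.x.to(device), data.edge_index.to(device))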

[WIP] TODO for alpha release

  • 1 example for PyG, and 1 example for DGL.
  • Pip install: pip install torch-quiver
  • Unit Tests
  • Documents (API design, and design principles)
  • Performance numbers (papers100M dataset)
  • README (Features, Install, Toy Example, APIs for PyG and DGL, ...)
  • Community name: quiver-team
  • Library name: torch-quiver
  • Docker image

Why cuda stream?

Hi, I was interested in how you perform GPU sampling in UVA mode, so I briefly read through the C++ code. I noticed something that seems a bit odd: you use a CUDA stream during sampling. That is, the actual model computation runs on the default CUDA stream, while GPU sampling runs on a non-default stream. I am confused by this, because the model's forward computation cannot start until the GPU sampling results have been returned, so there does not appear to be any step that can run asynchronously; it has to execute synchronously. Could someone explain this? Does it bring any speed-up?
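One possible reason for sampling on a non-default stream, sketched below as a generic PyTorch pattern rather than Quiver's actual implementation (sample_fn and step_fn are hypothetical placeholders for the sampler and the forward/backward step): it lets sampling for the next mini-batch overlap with training on the current one, even though each individual batch is still consumed synchronously.

import torch

def train_pipelined(seed_batches, sample_fn, step_fn):
    # Illustrative only: launch sampling for batch i on a side stream while the
    # default stream is still training on batch i-1, then synchronise before
    # the newly sampled batch is consumed.
    side_stream = torch.cuda.Stream()
    pending = None                                # batch sampled in the previous iteration
    for seeds in seed_batches:
        with torch.cuda.stream(side_stream):      # sampling kernels go to the side stream
            sampled = sample_fn(seeds)
        if pending is not None:
            step_fn(pending)                      # default stream: train on the previous batch
        torch.cuda.current_stream().wait_stream(side_stream)
        pending = sampled
    if pending is not None:
        step_fn(pending)                          # flush the final batch

Whether Quiver actually exploits this overlap is a question for the maintainers; if sampling and training are strictly serialised, the extra stream indeed brings no speed-up on its own.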

module 'torch_quiver' has no attribute 'device_quiver_from_csr_array'

Hi, I ran into an error when using torch-quiver==0.1.1 and trying to run the example.

The error is

Traceback (most recent call last):
  File "reddit_quiver.py", line 29, in <module>
    quiver_sampler = quiver.pyg.GraphSageSampler(csr_topo, sizes=[25, 10], device=0, mode='GPU') # Quiver
  File "/usr/local/lib/python3.6/dist-packages/quiver/pyg/sage_sampler.py", line 72, in __init__
    self.quiver = qv.device_quiver_from_csr_array(self.csr_topo.indptr,
AttributeError: module 'torch_quiver' has no attribute 'device_quiver_from_csr_array'

Is there any suggestion for it?
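One quick way to narrow this down (an illustrative check, not a fix) is to confirm which compiled torch_quiver extension is actually imported and whether it exports the symbol that sage_sampler.py calls; a pip-installed 0.1.1 extension may not match example code taken from a newer checkout of the repository.

import quiver
import torch_quiver as qv

# Print where the compiled extension was loaded from and whether it exposes
# the function the sampler expects.
print("extension:", qv.__file__)
print("exports device_quiver_from_csr_array:",
      hasattr(qv, "device_quiver_from_csr_array"))
print("csr-related symbols:", [s for s in dir(qv) if "csr" in s.lower()])

If the symbol is missing, installing a torch-quiver version that matches the example code (or building from the same source tree) is the most likely remedy.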

Quiver multi-node ogbn-mag240m benchmark

I installed branch 0.2.0 within a conda env:

  • python 3.9
  • pytorch 1.12.1
  • cuda 10.2.

As a sanity check, I can run examples/reddit-quiver.py and it works without any errors.
However, when I try to run benchmarks/ogbn-mag240m/train_quiver_multi_node.py, I run into several problems. So far I have been using a single node with one GPU, so I changed preprocessing.py accordingly (preprocess('data/mag', host=0, host_size=1, p2p_group=1, p2p_size=1)) and set cache_policy in the benchmark itself to the default one.

And this is what I get:

  • warnings that the socket cannot be initialized, which should not matter since I only use a single node(?)
  • then some libs problems - libibverbs: Could not locate libibgni
  • and the error itself: CUDA error: all CUDA-capable devices are busy or unavailable

I am stuck on this. Does anybody know what might be causing the error? Is it even possible to run the benchmark on a single node, and if not, what prevents it? (See the quick check sketched after the output below.)

The benchmark output:

Namespace(hidden_channels=1024, batch_size=1024, dropout=0.5, epochs=100, model='graphsage', sizes=[25, 15], in_memory=False, device='0', evaluate=False, host_size=1, local_size=1, host=0)
Global seed set to 42
[W socket.cpp:401] [c10d] The server socket cannot be initialized on [::]:19216 (errno: 97 - Address family not supported by protocol).
[W socket.cpp:558] [c10d] The client socket cannot be initialized to connect to [nid02085]:19216 (errno: 97 - Address family not supported by protocol).
[W socket.cpp:558] [c10d] The client socket cannot be initialized to connect to [nid02085]:19216 (errno: 97 - Address family not supported by protocol).
libibverbs: Could not locate libibgni (/usr/lib64/libibgni.so.1: undefined symbol: verbs_uninit_context)
libibverbs: Warning: couldn't open config directory '/opt/cray/rdma-core/27.1-7.0.3.1_4.6__g4beae6eb.ari/etc/libibverbs.d'.
MAG240: Reading the dataset... LOG >>> Memory Budge On 0 is 4095 MB
feat init 2.8276915550231934
Dataloader set up! [35.83s]
Let's use 1 GPUs!
0 beg
Traceback (most recent call last):
  File "/scratch/snx3000/prenc/torch-quiver/benchmarks/ogbn-mag240m/train_quiver_multi_node.py", line 426, in <module>
    mp.spawn(run,
  File "/scratch/snx3000/prenc/torch-quiver/venv/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 240, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/scratch/snx3000/prenc/torch-quiver/venv/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 198, in start_processes
    while not context.join():
  File "/scratch/snx3000/prenc/torch-quiver/venv/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 160, in join
    raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/scratch/snx3000/prenc/torch-quiver/venv/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
    fn(i, *args)
  File "/scratch/snx3000/prenc/torch-quiver/benchmarks/ogbn-mag240m/train_quiver_multi_node.py", line 302, in run
    model = GNN(args.model,
  File "/scratch/snx3000/prenc/torch-quiver/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 927, in to
    return self._apply(convert)
  File "/scratch/snx3000/prenc/torch-quiver/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 579, in _apply
    module._apply(fn)
  File "/scratch/snx3000/prenc/torch-quiver/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 579, in _apply
    module._apply(fn)
  File "/scratch/snx3000/prenc/torch-quiver/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 579, in _apply
    module._apply(fn)
  File "/scratch/snx3000/prenc/torch-quiver/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 602, in _apply
    param_applied = fn(param)
  File "/scratch/snx3000/prenc/torch-quiver/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 925, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA error: all CUDA-capable devices are busy or unavailable
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
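A minimal diagnostic sketch (not part of the benchmark) that can help narrow this down: "all CUDA-capable devices are busy or unavailable" is often raised when the GPU is set to an exclusive compute mode or is already held by another process, so a first step is to check whether the spawned worker can create a CUDA context at all.

import os
import torch

# Check basic visibility and try to create a context on the intended device.
print("CUDA available:", torch.cuda.is_available())
print("device count:", torch.cuda.device_count())
print("CUDA_VISIBLE_DEVICES:", os.environ.get("CUDA_VISIBLE_DEVICES"))
try:
    t = torch.zeros(1, device="cuda:0")
    print("allocated on", t.device)
except RuntimeError as err:
    print("could not create a CUDA context:", err)

On clusters, nvidia-smi -q -d COMPUTE shows the device's compute mode; if it is set to Exclusive_Process, any additional process touching the same GPU (including helper processes spawned by the data pipeline) will fail with this error.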

CUDA error when running the quiver example with the ogbn-papers100M dataset from the benchmark

Hi, I found a bug: when running the ogbn-papers100M example program from the benchmark, I hit a CUDA error (CUBLAS_STATUS_NOT_INITIALIZED), even though I had already followed the steps described in the documentation.
[screenshot]

Through debugging I located where it actually goes wrong: the call to cudaHostRegister returns a CUDA error, but the return code is not checked. The returned error code is 1, which corresponds to cudaErrorInvalidValue.
[screenshot]

From the logs, the failure occurs once cudaHostRegister has host-mapped 30000000000 bytes, which is rather strange.
[screenshot]

Could the developers please take a look at this issue? Thank you!

An error occurred when installing torch-quiver.

I tried to install torch-quiver, but I got the error shown below.

[My environment]

OS : Linux version 3.10.0-1127.el7.x86_64 ([email protected]) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC) ) #1 SMP Tue Mar 31 23:36:51 UTC 2020

gpu : 2080ti

cuda : release 10.2, V10.2.89

python : 3.9.7 in conda

pytorch : '1.10.2' in conda

pyg : 2.0.3 in conda

gcc : 6.5.0

Running pip install torch-quiver gives:

Collecting torch-quiver
  Using cached torch_quiver-0.1.1.tar.gz (117 kB)
Building wheels for collected packages: torch-quiver
  Building wheel for torch-quiver (setup.py) ... error
  ERROR: Command errored out with exit status 1:
   command: /home/xxx/anaconda3/envs/torch_quiver/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-kokivq60/torch-quiver_c32f9749bbf34bd6b524a81b30256cd0/setup.py'"'"'; __file__='"'"'/tmp/pip-install-kokivq60/torch-quiver_c32f9749bbf34bd6b524a81b30256cd0/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-h6bay53m
       cwd: /tmp/pip-install-kokivq60/torch-quiver_c32f9749bbf34bd6b524a81b30256cd0/
  Complete output (40 lines):
  /home/xxx/anaconda3/envs/torch_quiver/lib/python3.9/distutils/extension.py:131: UserWarning: Unknown Extension options: 'with_cuda'
    warnings.warn(msg)
  running bdist_wheel
  /home/xxx/anaconda3/envs/torch_quiver/lib/python3.9/site-packages/torch/utils/cpp_extension.py:381: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
    warnings.warn(msg.format('we could not find ninja.'))
  running build
  running build_py
  creating build
  creating build/lib.linux-x86_64-3.9
  creating build/lib.linux-x86_64-3.9/quiver
  copying ./srcs/python/quiver/partition.py -> build/lib.linux-x86_64-3.9/quiver
  copying ./srcs/python/quiver/comm.py -> build/lib.linux-x86_64-3.9/quiver
  copying ./srcs/python/quiver/feature.py -> build/lib.linux-x86_64-3.9/quiver
  copying ./srcs/python/quiver/shard_tensor.py -> build/lib.linux-x86_64-3.9/quiver
  copying ./srcs/python/quiver/async_cuda_sampler.py -> build/lib.linux-x86_64-3.9/quiver
  copying ./srcs/python/quiver/__init__.py -> build/lib.linux-x86_64-3.9/quiver
  copying ./srcs/python/quiver/utils.py -> build/lib.linux-x86_64-3.9/quiver
  creating build/lib.linux-x86_64-3.9/quiver/multiprocessing
  copying ./srcs/python/quiver/multiprocessing/reductions.py -> build/lib.linux-x86_64-3.9/quiver/multiprocessing
  copying ./srcs/python/quiver/multiprocessing/__init__.py -> build/lib.linux-x86_64-3.9/quiver/multiprocessing
  creating build/lib.linux-x86_64-3.9/quiver/pyg
  copying ./srcs/python/quiver/pyg/__init__.py -> build/lib.linux-x86_64-3.9/quiver/pyg
  copying ./srcs/python/quiver/pyg/sage_sampler.py -> build/lib.linux-x86_64-3.9/quiver/pyg
  running build_ext
  building 'torch_quiver' extension
  creating build/temp.linux-x86_64-3.9
  creating build/temp.linux-x86_64-3.9/srcs
  creating build/temp.linux-x86_64-3.9/srcs/cpp
  creating build/temp.linux-x86_64-3.9/srcs/cpp/src
  creating build/temp.linux-x86_64-3.9/srcs/cpp/src/quiver
  creating build/temp.linux-x86_64-3.9/srcs/cpp/src/quiver/cpu
  creating build/temp.linux-x86_64-3.9/srcs/cpp/src/quiver/cuda
  creating build/temp.linux-x86_64-3.9/srcs/cpp/src/quiver/torch
  gcc -pthread -B /home/xxx/anaconda3/envs/torch_quiver/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /home/xxx/anaconda3/envs/torch_quiver/include -I/home/xxx/anaconda3/envs/torch_quiver/include -fPIC -O2 -isystem /home/xxx/anaconda3/envs/torch_quiver/include -fPIC -I/tmp/pip-install-kokivq60/torch-quiver_c32f9749bbf34bd6b524a81b30256cd0/./srcs/cpp/include -I/usr/local/cuda/include -I/home/xxx/anaconda3/envs/torch_quiver/lib/python3.9/site-packages/torch/include -I/home/xxx/anaconda3/envs/torch_quiver/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/home/xxx/anaconda3/envs/torch_quiver/lib/python3.9/site-packages/torch/include/TH -I/home/xxx/anaconda3/envs/torch_quiver/lib/python3.9/site-packages/torch/include/THC -I/home/xxx/anaconda3/envs/torch_quiver/include/python3.9 -c srcs/cpp/src/quiver/cpu/tensor.cpp -o build/temp.linux-x86_64-3.9/srcs/cpp/src/quiver/cpu/tensor.o -std=c++17 -DHAVE_CUDA -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=torch_quiver -D_GLIBCXX_USE_CXX11_ABI=0
  /home/xxx/cuda10.2/bin/nvcc -I/tmp/pip-install-kokivq60/torch-quiver_c32f9749bbf34bd6b524a81b30256cd0/./srcs/cpp/include -I/usr/local/cuda/include -I/home/xxx/anaconda3/envs/torch_quiver/lib/python3.9/site-packages/torch/include -I/home/xxx/anaconda3/envs/torch_quiver/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/home/xxx/anaconda3/envs/torch_quiver/lib/python3.9/site-packages/torch/include/TH -I/home/xxx/anaconda3/envs/torch_quiver/lib/python3.9/site-packages/torch/include/THC -I/home/xxx/anaconda3/envs/torch_quiver/include/python3.9 -c srcs/cpp/src/quiver/cuda/quiver_comm.cu -o build/temp.linux-x86_64-3.9/srcs/cpp/src/quiver/cuda/quiver_comm.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -O3 --expt-extended-lambda -lnuma -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=torch_quiver -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 -std=c++14
  /home/xxx/cuda10.2/bin/nvcc -I/tmp/pip-install-kokivq60/torch-quiver_c32f9749bbf34bd6b524a81b30256cd0/./srcs/cpp/include -I/usr/local/cuda/include -I/home/xxx/anaconda3/envs/torch_quiver/lib/python3.9/site-packages/torch/include -I/home/xxx/anaconda3/envs/torch_quiver/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/home/xxx/anaconda3/envs/torch_quiver/lib/python3.9/site-packages/torch/include/TH -I/home/xxx/anaconda3/envs/torch_quiver/lib/python3.9/site-packages/torch/include/THC -I/home/xxx/anaconda3/envs/torch_quiver/include/python3.9 -c srcs/cpp/src/quiver/cuda/quiver_feature.cu -o build/temp.linux-x86_64-3.9/srcs/cpp/src/quiver/cuda/quiver_feature.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -O3 --expt-extended-lambda -lnuma -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=torch_quiver -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 -std=c++14
  srcs/cpp/src/quiver/cuda/quiver_feature.cu(192): warning: arithmetic on pointer to void or function type

  nvcc error   : 'cicc' died due to signal 11 (Invalid memory reference)
  error: command '/home/xxx/cuda10.2/bin/nvcc' failed with exit code 11
  ----------------------------------------
  ERROR: Failed building wheel for torch-quiver

Thanks.

How to develop quiver

I am trying to develop quiver.
I only added -e to the install command in the current install.sh, i.e. python3 -m pip install --no-index -U -e .
pypa/setuptools#3755
I changed torch-quiver/srcs/cpp/src/quiver/quiver.cpp, and then ran:

cd build 
cmake ..
make 

The output:

/usr/bin/cmake -H/root/share/torch-quiver -B/root/share/torch-quiver/build --check-build-system CMakeFiles/Makefile.cmake 0
/usr/bin/cmake -E cmake_progress_start /root/share/torch-quiver/build/CMakeFiles /root/share/torch-quiver/build/CMakeFiles/progress.marks
make -f CMakeFiles/Makefile2 all
make[1]: Entering directory '/root/share/torch-quiver/build'
make[1]: Nothing to be done for 'all'.
make[1]: Leaving directory '/root/share/torch-quiver/build'
/usr/bin/cmake -E cmake_progress_start /root/share/torch-quiver/build/CMakeFiles 0

Why?
This is because the make step is driven by setup.py, so we only need to run pip install --no-index -U -e . again.

My env:

OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.27

Python version: 3.8.12 (default, Oct 12 2021, 13:49:34)  [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-4.15.0-167-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.3.109
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: Tesla V100-SXM2-16GB
GPU 1: Tesla V100-SXM2-16GB
GPU 2: Tesla V100-SXM2-16GB

Nvidia driver version: 515.65.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

Versions of relevant libraries:
[pip3] numpy==1.21.2
[pip3] torch==1.11.0
[pip3] torchelastic==0.2.2
[pip3] torchtext==0.12.0
[pip3] torchvision==0.12.0
[conda] blas                      1.0                         mkl
[conda] cudatoolkit               11.3.1               ha36c431_9    nvidia
[conda] ffmpeg                    4.3                  hf484d3e_0    pytorch
[conda] mkl                       2021.4.0           h06a4308_640
[conda] mkl-service               2.4.0            py38h7f8727e_0
[conda] mkl_fft                   1.3.1            py38hd3c417c_0
[conda] mkl_random                1.2.2            py38h51133e4_0
[conda] numpy                     1.21.2           py38h20f2e39_0
[conda] numpy-base                1.21.2           py38h79a1101_0
[conda] pytorch                   1.11.0          py3.8_cuda11.3_cudnn8.2.0_0    pytorch
[conda] pytorch-mutex             1.0                        cuda    pytorch
[conda] torchelastic              0.2.2                    pypi_0    pypi
[conda] torchtext                 0.12.0                     py38    pytorch
[conda] torchvision               0.12.0               py38_cu113    pytorch

quiver sampler has no rand?

Each call to quiver_sample.sample(seed) with the same seed returns the same result. How can I make the same seed return different results?
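If the goal is to see different neighbourhoods across training epochs, shuffling which seed nodes form each mini-batch already achieves that, independent of the sampler's internal random state. A small sketch, assuming a train_idx tensor of training node IDs and a quiver_sampler object as in the training examples (num_epochs is a placeholder):

from torch.utils.data import DataLoader

# Shuffle the seed nodes every epoch so the mini-batches (and therefore the
# sampled subgraphs) differ, even if sampling for a fixed seed set were
# deterministic. This does not reseed the sampler itself.
seed_loader = DataLoader(train_idx, batch_size=1024, shuffle=True, drop_last=True)

for epoch in range(num_epochs):
    for seeds in seed_loader:
        sampled = quiver_sampler.sample(seeds)   # neighbour sampling for this mini-batch
        # ... forward/backward on the sampled mini-batch ...

Whether the sampler can be explicitly reseeded per call is a separate question for the maintainers.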

implement partition based sampling

Currently we store the entire graph topology on the GPU (in the quiver class), which does not scale when the graph becomes large.
We need to implement a new data structure that stores only a partition of the graph, and port the existing sampling methods to this new data structure (see the sketch below).
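As a starting point, here is a minimal sketch (illustrative only, not Quiver's design) of a per-partition CSR structure that keeps the adjacency of a contiguous node-ID range locally while leaving neighbour IDs global:

import numpy as np

class CSRPartition:
    """Hold the adjacency of global nodes in [lo, hi) in CSR form, so each
    device stores only its partition of the topology."""

    def __init__(self, indptr, indices, lo, hi):
        self.lo, self.hi = lo, hi
        start, end = indptr[lo], indptr[hi]
        self.indptr = indptr[lo:hi + 1] - indptr[lo]   # local row pointers
        self.indices = indices[start:end]              # neighbour IDs stay global

    def neighbors(self, node):
        assert self.lo <= node < self.hi, "node is owned by another partition"
        row = node - self.lo
        return self.indices[self.indptr[row]:self.indptr[row + 1]]

    def sample(self, node, k, rng=None):
        rng = rng or np.random.default_rng()
        nbrs = self.neighbors(node)
        return nbrs if len(nbrs) <= k else rng.choice(nbrs, size=k, replace=False)

Sampling a node that falls outside the local range would have to be forwarded to the partition that owns it (for example over NCCL or RPC), and porting the existing sampling methods to work on such partitioned structures is the main piece of work.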

Installation Problem: which GNU version does torch-quiver require?

Hi, I tried to install quiver with pip install torch-quiver, but ran into GCC version problems:

When I put GCC 5.4 on the system path, the command failed with the following error message:

.....
    In file included from /usr/local/cuda/include/cuda_runtime.h:62:0, 
                     from <command-line>:0:
    /usr/local/cuda/include/host_config.h:105:2: error: #error -- unsupported GNU version! gcc 4.10 and up are not supported!
     #error -- unsupported GNU version! gcc 4.10 and up are not supported!
      ^
.....

But when I switched to GCC 4.9.2, the command failed with the following error message:

.....
c++: error: unrecognized command line option ‘-std=c++17’
.....

    In file included from /home/data/xzhanggb/envs/miniconda3/envs/pyg/lib/python3.8/site-packages/torch/include/c10/core/impl/InlineDeviceGuard.h:9:0,
                     from /home/data/xzhanggb/envs/miniconda3/envs/pyg/lib/python3.8/site-packages/torch/include/c10/core/DeviceGuard.h:3,
                     from /home/data/xzhanggb/envs/miniconda3/envs/pyg/lib/python3.8/site-packages/torch/include/c10/cuda/CUDAStream.h:8,
                     from /tmp/pip-install-juxpgiz9/torch-quiver_2611d491e46d44dcaef859934cd2dc32/srcs/cpp/src/quiver/cuda/quiver_comm.cu:1:
    /home/data/xzhanggb/envs/miniconda3/envs/pyg/lib/python3.8/site-packages/torch/include/c10/util/C++17.h:16:2: error: #error "You're trying to build PyTorch with a too old version of GCC. We need GCC 5 or later."
     #error \
      ^

So I am confused about which GCC version to use...

I am using Python3.8, CUDA 10.2, PyTorch 1.9.1+cu102, torch-geometric 2.0.4 on CentOS 7.9.2009

Does Quiver's installation have additional system requirements, or is it just breaking in my specific environment? Thank you!
