--------------------------------------- Captured log call --------------------------------------
ERROR ignite.engine.engine.Engine:engine.py:1086 Current run is terminating due to exception: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
  File "/data2/rjli/gnina-torch/gninatorch/models.py", line 320, in forward
    x = x.view(-1, self.features_out_size)
    lig_pose_raw = self.lig_pose(x)
                   ~~~~~~~~~~~~~ <--- HERE
    lig_pose_log = F.log_softmax(lig_pose_raw, dim=1)
  File "/data2/rjli/mambaforge/envs/gninatorch/lib/python3.9/site-packages/torch/nn/modules/container.py", line 141, in forward
    def forward(self, input):
        for module in self:
            input = module(input)
                    ~~~~~~ <--- HERE
        return input
  File "/data2/rjli/mambaforge/envs/gninatorch/lib/python3.9/site-packages/torch/nn/modules/linear.py", line 103, in forward
    def forward(self, input: Tensor) -> Tensor:
        return F.linear(input, self.weight, self.bias)
               ~~~~~~~~ <--- HERE
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
ERROR ignite.engine.engine.Engine:engine.py:992 Engine run is terminating due to exception: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last): [same traceback as above]
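
For reference, the failing call path boils down to a plain float32 matmul on the GPU. A minimal sketch of that path (layer sizes here are placeholders, not the actual gnina-torch dimensions; only the name `features_out_size` is borrowed from the traceback):

    # Minimal sketch of the failing path: on CUDA, F.linear dispatches to
    # cublasSgemm, the call that raises CUBLAS_STATUS_NOT_SUPPORTED above.
    # Sizes are placeholders, not the real gnina-torch model dimensions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    features_out_size = 128  # placeholder
    lig_pose = nn.Sequential(nn.Linear(features_out_size, 2)).to("cuda")

    x = torch.randn(4, features_out_size, device="cuda")
    x = x.view(-1, features_out_size)
    lig_pose_raw = lig_pose(x)                      # F.linear -> cublasSgemm
    lig_pose_log = F.log_softmax(lig_pose_raw, dim=1)

If this snippet fails with the same error outside the test suite, the problem lies in the CUDA/cuBLAS stack rather than in the models.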
=============================================================================================== warnings summary ================================================================================================
../mambaforge/envs/gninatorch/lib/python3.9/site-packages/mlflow/utils/requirements_utils.py:12
  /data2/rjli/mambaforge/envs/gninatorch/lib/python3.9/site-packages/mlflow/utils/requirements_utils.py:12: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
    import pkg_resources

../mambaforge/envs/gninatorch/lib/python3.9/site-packages/pkg_resources/__init__.py:2871
  /data2/rjli/mambaforge/envs/gninatorch/lib/python3.9/site-packages/pkg_resources/__init__.py:2871: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('mpl_toolkits')`.
  Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
    declare_namespace(pkg)

../mambaforge/envs/gninatorch/lib/python3.9/site-packages/pkg_resources/__init__.py:2871
  /data2/rjli/mambaforge/envs/gninatorch/lib/python3.9/site-packages/pkg_resources/__init__.py:2871: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google')`.
  Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
    declare_namespace(pkg)
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
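
The deprecation warnings above come from third-party packages and are unrelated to the failures. For completeness, a generic sketch of the migration setuptools points at: `importlib.metadata` (stdlib since Python 3.8) replaces `pkg_resources` metadata lookups, and PEP 420 implicit namespace packages replace `declare_namespace` (the namespace `__init__.py` files are simply deleted). Whether this maps onto mlflow's actual `pkg_resources` use is an assumption:

    # Generic pkg_resources -> importlib.metadata sketch; NOT mlflow's code.
    from importlib.metadata import version, PackageNotFoundError

    try:
        # was: pkg_resources.get_distribution("torch").version
        torch_version = version("torch")
    except PackageNotFoundError:
        torch_version = None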
---------- coverage: platform linux, python 3.9.16-final-0 -----------
Name                        Stmts   Miss  Cover
-----------------------------------------------
gninatorch/__init__.py          5      0   100%
gninatorch/dataloaders.py      73      1    99%
gninatorch/gnina.py           105      7    93%
gninatorch/inference.py        87     50    43%
gninatorch/losses.py           32      0   100%
gninatorch/metrics.py          28      0   100%
gninatorch/models.py          218     39    82%
gninatorch/setup.py            12      0   100%
gninatorch/training.py        235     67    71%
gninatorch/transforms.py       20      0   100%
gninatorch/utils.py            39     16    59%
-----------------------------------------------
TOTAL                         854    180    79%
============================================================================================ short test summary info ============================================================================================
FAILED tests/test_gnina.py::test_gnina_model_prediction[redock_default2018-CNNscore0-CNNaffinity0] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_gnina.py::test_gnina_model_prediction[general_default2018-CNNscore1-CNNaffinity1] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_gnina.py::test_gnina_model_prediction[crossdock_default2018-CNNscore2-CNNaffinity2] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_gnina.py::test_gnina_model_prediction[dense-CNNscore3-CNNaffinity3] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_gnina.py::test_gnina_model_prediction_ensemble[redock_default2018-CNNscore0-CNNaffinity0-CNNvariance0] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_gnina.py::test_gnina_model_prediction_ensemble[general_default2018-CNNscore1-CNNaffinity1-CNNvariance1] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_gnina.py::test_gnina_model_prediction_ensemble[crossdock_default2018-CNNscore2-CNNaffinity2-CNNvariance2] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_gnina.py::test_gnina_model_prediction_ensemble[dense-CNNscore3-CNNaffinity3-CNNvariance3] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_gnina.py::test_gnina[redock_default2018-CNNscore0-CNNaffinity0] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_gnina.py::test_gnina[general_default2018-CNNscore1-CNNaffinity1] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_gnina.py::test_gnina[crossdock_default2018-CNNscore2-CNNaffinity2] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_gnina.py::test_gnina[dense-CNNscore3-CNNaffinity3] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_gnina.py::test_gnina_ensemble[redock_default2018_ensemble-CNNscore0-CNNaffinity0-CNNvariance0] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_gnina.py::test_gnina_ensemble[general_default2018_ensemble-CNNscore1-CNNaffinity1-CNNvariance1] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_gnina.py::test_gnina_ensemble[crossdock_default2018_ensemble-CNNscore2-CNNaffinity2-CNNvariance2] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_gnina.py::test_gnina_ensemble[dense_ensemble-CNNscore3-CNNaffinity3-CNNvariance3] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_gnina.py::test_gnina_ensemble[default-CNNscore4-CNNaffinity4-CNNvariance4] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_inference.py::test_inference - RuntimeError: The following operation failed in the TorchScript interpreter.
FAILED tests/test_inference.py::test_inference_affinity - RuntimeError: The following operation failed in the TorchScript interpreter.
FAILED tests/test_inference.py::test_inference_flex - RuntimeError: The following operation failed in the TorchScript interpreter.
FAILED tests/test_models.py::test_forward_pose[default2017] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_models.py::test_forward_pose[default2018] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_models.py::test_forward_pose[dense] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_models.py::test_forward_affinity[default2017] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_models.py::test_forward_affinity[default2018] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_models.py::test_forward_affinity[dense] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_models.py::test_forward_affinity[hires_pose] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_models.py::test_forward_flex[default2017] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_models.py::test_forward_flex[default2018] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_models.py::test_forward_flex[dense] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_models.py::test_forward_affinity_big[default2017] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_models.py::test_forward_affinity_big[default2018] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_models.py::test_forward_affinity_big[dense] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_models.py::test_forward_affinity_big[hires_pose] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_models.py::test_forward_affinity_big[hires_affinity] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_models.py::test_gnina_model_ensemble_average[default2017] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_models.py::test_gnina_model_ensemble_average[default2018] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_models.py::test_gnina_model_ensemble_average[dense] - RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
FAILED tests/test_training.py::test_training - RuntimeError: The following operation failed in the TorchScript interpreter.
FAILED tests/test_training.py::test_training_with_test - RuntimeError: The following operation failed in the TorchScript interpreter.
FAILED tests/test_training.py::test_training_pose_and_affinity_with_test - RuntimeError: The following operation failed in the TorchScript interpreter.
FAILED tests/test_training.py::test_training_lr_scheduler_with_test - RuntimeError: The following operation failed in the TorchScript interpreter.
FAILED tests/test_training.py::test_training_flexposepose_with_test - RuntimeError: The following operation failed in the TorchScript interpreter.
============================================================================= 43 failed, 88 passed, 3 warnings in 149.65s (0:02:29) =============================================================================
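
All 43 failures share one root cause: `cublasSgemm` failing with CUBLAS_STATUS_NOT_SUPPORTED (the TorchScript-interpreter failures in tests/test_inference.py and tests/test_training.py wrap the same error, as the captured traceback shows). That status usually points at an environment mismatch (PyTorch build vs. CUDA driver, or an unsupported GPU compute capability) rather than a bug in the tests. A quick sanity check using only standard torch introspection, which should reproduce the error in isolation if the stack is broken:

    # Environment sanity check for the CUBLAS_STATUS_NOT_SUPPORTED failures:
    # print the PyTorch build, the CUDA toolkit it was built against, and the
    # GPU's compute capability, then run the smallest possible float32 matmul
    # (which goes through cublasSgemm).
    import torch

    print("torch:", torch.__version__)
    print("built with CUDA:", torch.version.cuda)
    print("cuda available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("device:", torch.cuda.get_device_name(0))
        print("compute capability:", torch.cuda.get_device_capability(0))
        a = torch.randn(2, 2, device="cuda")
        print("sgemm ok:", (a @ a).shape)  # raises here if cuBLAS is broken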