Comments (7)
Hi Pedro! Can you please share the output that is written to the command line when UniRes is run? Thanks
from unires.
Hi Mikael!
The output for the multi-modal setup is as follows:
unires /data/pedro/MPMGen-validation/Downsampled-Data/SABRE-new/sub-974/SABRE-sub-974-Native_input-MPRAGE.nii.gz /data/pedro/MPMGen-validation/Downsampled-Data/SABRE-new/sub-974/SABRE-sub-974-Native_input-FLAIR.nii.gz /data/pedro/MPMGen-validation/Downsampled-Data/SABRE-new/sub-974/SABRE-sub-974-Native_input-T2SE.nii.gz
[UniRes ASCII-art banner]
22/11/2023 13:23:38 | GPU: Quadro RTX 8000, CUDA: True, PyTorch: 1.10.0a0+0aef44c
Input
c=0, n=0 | fname=/data/pedro/MPMGen-validation/Downsampled-Data/SABRE-new/sub-974/SABRE-sub-974-Native_input-MPRAGE.nii.gz
c=1, n=0 | fname=/data/pedro/MPMGen-validation/Downsampled-Data/SABRE-new/sub-974/SABRE-sub-974-Native_input-FLAIR.nii.gz
c=2, n=0 | fname=/data/pedro/MPMGen-validation/Downsampled-Data/SABRE-new/sub-974/SABRE-sub-974-Native_input-T2SE.nii.gz
Estimating model hyper-parameters... completed in 1.89131 seconds:
c=0 | tau= 5.082e+04 | sd= 0.004436 | mu= 0.1878 | ct=False
c=1 | tau= 1.737e+12 | sd= 7.587e-07 | mu= 0.1896 | ct=False
c=2 | tau= 1.473e+12 | sd= 8.239e-07 | mu= 0.1985 | ct=False
/opt/conda/lib/python3.8/site-packages/nitorch/tools/affine_reg/_core.py:69: UserWarning: torch.solve is deprecated in favor of torch.linalg.solve and will be removed in a future PyTorch release.
torch.linalg.solve has its arguments reversed and does not return the LU factorization.
To get the LU factorization see torch.lu, which can be used with torch.lu_solve or torch.lu_unpack.
X = torch.solve(B, A).solution
should be replaced with
X = torch.linalg.solve(A, B) (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp:758.)
sf = torch.tensor([mn_out, mx_out], dtype=dtype, device=device)[..., None].solve(sf)[0].squeeze()
/opt/conda/lib/python3.8/site-packages/nitorch/tools/affine_reg/_core.py:320: UserWarning: floordiv is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
size_pad = (torch.tensor(size_pad) - 1) // 2
/opt/conda/lib/python3.8/site-packages/nitorch/tools/affine_reg/_costs.py:220: UserWarning: floordiv is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
p = (torch.tensor(p) - 1) // 2
Performing multi-channel (N=3) alignment... completed in 7.42558 seconds.
/opt/conda/lib/python3.8/site-packages/nitorch/tools/_preproc_utils.py:289: UserWarning: floordiv is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
R22 = R2.mm((torch.sum(R2, dim=0, keepdim=True).t()//2 - 1)*rdim)
/opt/conda/lib/python3.8/site-packages/nitorch/tools/_preproc_utils.py:385: UserWarning: torch.cholesky is deprecated in favor of torch.linalg.cholesky and will be removed in a future PyTorch release.
L = torch.cholesky(A)
should be replaced with
L = torch.linalg.cholesky(A)
and
U = torch.cholesky(A, upper=True)
should be replaced with
U = torch.linalg.cholesky(A).transpose(-2, -1).conj().
This transform will produce equivalent results for all valid (symmetric positive definite) inputs. (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp:1274.)
C = torch.cholesky(R.t().mm(R))
Mean space | dim=(182, 259, 258), vx=(1.0, 1.0, 1.0)
/opt/conda/lib/python3.8/site-packages/unires/_project.py:281: UserWarning: floordiv is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
off = -(off - 1) // 2 # set offset
ADMM step-size=5371.2163
Starting super-resolution (update_rigid=True, update_scaling=True)
| C=3 | N=3 | device=cuda | max_iter=512 | tol=0.0001 | sched_num=3
0 - Convergence ( 9.3 s) | nlyx = 2.854e+12, nlxy = 2.854e+12, nly = 9.939e+07, gain = inf
1 - Convergence ( 9.6 s) | nlyx = 2.595e+11, nlxy = 2.594e+11, nly = 1.053e+08, gain = 1.0000000
2 - Convergence (10.2 s) | nlyx = 2.317e+11, nlxy = 2.316e+11, nly = 1.054e+08, gain = 0.0106141
3 - Convergence (11.2 s) | nlyx = 1.268e+11, nlxy = 1.267e+11, nly = 1.063e+08, gain = 0.0384521
4 - Convergence (10.1 s) | nlyx = 1.178e+11, nlxy = 1.176e+11, nly = 1.064e+08, gain = 0.0033106
5 - Convergence (10.1 s) | nlyx = 1.134e+11, nlxy = 1.133e+11, nly = 1.064e+08, gain = 0.0015887
6 - Convergence (10.2 s) | nlyx = 1.012e+11, nlxy = 1.011e+11, nly = 1.066e+08, gain = 0.0044194
7 - Convergence (10.4 s) | nlyx = 9.472e+10, nlxy = 9.462e+10, nly = 1.067e+08, gain = 0.0023591
8 - Convergence (11.0 s) | nlyx = 8.603e+10, nlxy = 8.592e+10, nly = 1.069e+08, gain = 0.0031407
9 - Convergence (10.1 s) | nlyx = 8.482e+10, nlxy = 8.472e+10, nly = 1.069e+08, gain = 0.0004364
10 - Convergence (10.1 s) | nlyx = 8.41e+10, nlxy = 8.399e+10, nly = 1.069e+08, gain = 0.0002623
11 - Convergence (10.1 s) | nlyx = 8.345e+10, nlxy = 8.335e+10, nly = 1.069e+08, gain = 0.0002325
12 - Convergence (10.1 s) | nlyx = 8.286e+10, nlxy = 8.275e+10, nly = 1.069e+08, gain = 0.0002151
13 - Convergence (10.0 s) | nlyx = 8.227e+10, nlxy = 8.217e+10, nly = 1.069e+08, gain = 0.0002108
14 - Convergence (10.1 s) | nlyx = 8.171e+10, nlxy = 8.161e+10, nly = 1.069e+08, gain = 0.0002011
15 - Convergence (10.1 s) | nlyx = 8.117e+10, nlxy = 8.106e+10, nly = 1.069e+08, gain = 0.0001968
16 - Convergence (11.0 s) | nlyx = 4.771e+10, nlxy = 4.76e+10, nly = 1.075e+08, gain = 0.0119237
17 - Convergence (11.0 s) | nlyx = 4.144e+10, nlxy = 4.134e+10, nly = 1.076e+08, gain = 0.0022277
18 - Convergence (10.1 s) | nlyx = 4.098e+10, nlxy = 4.087e+10, nly = 1.076e+08, gain = 0.0001667
19 - Convergence (10.1 s) | nlyx = 4.065e+10, nlxy = 4.055e+10, nly = 1.076e+08, gain = 0.0001139
20 - Convergence (10.2 s) | nlyx = 4.038e+10, nlxy = 4.028e+10, nly = 1.076e+08, gain = 0.0000966
21 - Convergence (10.1 s) | nlyx = 4.014e+10, nlxy = 4.003e+10, nly = 1.076e+08, gain = 0.0000872
22 - Convergence (10.4 s) | nlyx = 3.805e+10, nlxy = 3.794e+10, nly = 1.077e+08, gain = 0.0007418
23 - Convergence (10.1 s) | nlyx = 3.75e+10, nlxy = 3.739e+10, nly = 1.077e+08, gain = 0.0001952
24 - Convergence (11.6 s) | nlyx = 1.152e+11, nlxy = 1.151e+11, nly = 5.586e+07, gain = -0.0275841
25 - Convergence (10.5 s) | nlyx = 4.9e+10, nlxy = 4.894e+10, nly = 5.544e+07, gain = 0.0235011
26 - Convergence (10.1 s) | nlyx = 4.021e+10, nlxy = 4.016e+10, nly = 5.537e+07, gain = 0.0031187
27 - Convergence ( 9.9 s) | nlyx = 3.833e+10, nlxy = 3.827e+10, nly = 5.537e+07, gain = 0.0006703
28 - Convergence ( 9.9 s) | nlyx = 3.718e+10, nlxy = 3.712e+10, nly = 5.536e+07, gain = 0.0004083
29 - Convergence (10.0 s) | nlyx = 3.624e+10, nlxy = 3.618e+10, nly = 5.535e+07, gain = 0.0003329
30 - Convergence (10.2 s) | nlyx = 2.965e+10, nlxy = 2.959e+10, nly = 5.531e+07, gain = 0.0023340
31 - Convergence (10.1 s) | nlyx = 2.894e+10, nlxy = 2.889e+10, nly = 5.53e+07, gain = 0.0002488
32 - Convergence ( 9.5 s) | nlyx = 2.838e+10, nlxy = 2.832e+10, nly = 5.529e+07, gain = 0.0002000
33 - Convergence (10.3 s) | nlyx = 2.695e+10, nlxy = 2.689e+10, nly = 5.525e+07, gain = 0.0005061
34 - Convergence (10.0 s) | nlyx = 2.661e+10, nlxy = 2.656e+10, nly = 5.524e+07, gain = 0.0001193
35 - Convergence (10.2 s) | nlyx = 2.633e+10, nlxy = 2.627e+10, nly = 5.524e+07, gain = 0.0001001
36 - Convergence (10.2 s) | nlyx = 2.608e+10, nlxy = 2.602e+10, nly = 5.524e+07, gain = 0.0000882
37 - Convergence ( 9.9 s) | nlyx = 2.586e+10, nlxy = 2.58e+10, nly = 5.523e+07, gain = 0.0000787
38 - Convergence (10.3 s) | nlyx = 2.565e+10, nlxy = 2.56e+10, nly = 5.523e+07, gain = 0.0000718
39 - Convergence (10.3 s) | nlyx = 2.547e+10, nlxy = 2.541e+10, nly = 5.522e+07, gain = 0.0000660
40 - Convergence (10.1 s) | nlyx = 2.529e+10, nlxy = 2.524e+10, nly = 5.522e+07, gain = 0.0000617
41 - Convergence ( 9.8 s) | nlyx = 2.513e+10, nlxy = 2.507e+10, nly = 5.521e+07, gain = 0.0000578
42 - Convergence (10.2 s) | nlyx = 2.497e+10, nlxy = 2.492e+10, nly = 5.521e+07, gain = 0.0000548
43 - Convergence (10.4 s) | nlyx = 2.483e+10, nlxy = 2.477e+10, nly = 5.521e+07, gain = 0.0000522
44 - Convergence ( 9.4 s) | nlyx = 2.468e+10, nlxy = 2.463e+10, nly = 5.52e+07, gain = 0.0000501
45 - Convergence (10.3 s) | nlyx = 2.455e+10, nlxy = 2.449e+10, nly = 5.52e+07, gain = 0.0000484
46 - Convergence (10.3 s) | nlyx = 2.844e+10, nlxy = 2.841e+10, nly = 2.891e+07, gain = -0.0013762
47 - Convergence (10.2 s) | nlyx = 2.481e+10, nlxy = 2.478e+10, nly = 2.887e+07, gain = 0.0012824
48 - Convergence (10.0 s) | nlyx = 2.398e+10, nlxy = 2.395e+10, nly = 2.884e+07, gain = 0.0002952
49 - Convergence (10.0 s) | nlyx = 2.368e+10, nlxy = 2.365e+10, nly = 2.882e+07, gain = 0.0001051
50 - Convergence (10.5 s) | nlyx = 2.346e+10, nlxy = 2.343e+10, nly = 2.88e+07, gain = 0.0000781
51 - Convergence (10.0 s) | nlyx = 2.33e+10, nlxy = 2.327e+10, nly = 2.878e+07, gain = 0.0000549
52 - Convergence (10.1 s) | nlyx = 2.318e+10, nlxy = 2.315e+10, nly = 2.876e+07, gain = 0.0000450
53 - Convergence (10.3 s) | nlyx = 2.306e+10, nlxy = 2.303e+10, nly = 2.875e+07, gain = 0.0000426
54 - Convergence (10.3 s) | nlyx = 2.294e+10, nlxy = 2.291e+10, nly = 2.873e+07, gain = 0.0000412
55 - Convergence (10.3 s) | nlyx = 2.283e+10, nlxy = 2.28e+10, nly = 2.871e+07, gain = 0.0000400
56 - Convergence (10.3 s) | nlyx = 2.272e+10, nlxy = 2.269e+10, nly = 2.87e+07, gain = 0.0000390
57 - Convergence (10.3 s) | nlyx = 2.261e+10, nlxy = 2.258e+10, nly = 2.868e+07, gain = 0.0000381
58 - Convergence (10.3 s) | nlyx = 2.25e+10, nlxy = 2.247e+10, nly = 2.866e+07, gain = 0.0000372
59 - Convergence (10.3 s) | nlyx = 2.24e+10, nlxy = 2.237e+10, nly = 2.865e+07, gain = 0.0000364
60 - Convergence (10.3 s) | nlyx = 2.23e+10, nlxy = 2.227e+10, nly = 2.863e+07, gain = 0.0000357
61 - Convergence (10.3 s) | nlyx = 2.22e+10, nlxy = 2.217e+10, nly = 2.862e+07, gain = 0.0000350
62 - Convergence (10.3 s) | nlyx = 2.21e+10, nlxy = 2.207e+10, nly = 2.861e+07, gain = 0.0000343
63 - Convergence (10.3 s) | nlyx = 2.201e+10, nlxy = 2.198e+10, nly = 2.859e+07, gain = 0.0000337
64 - Convergence (10.3 s) | nlyx = 2.191e+10, nlxy = 2.188e+10, nly = 2.858e+07, gain = 0.0000331
65 - Convergence (10.3 s) | nlyx = 2.182e+10, nlxy = 2.179e+10, nly = 2.857e+07, gain = 0.0000325
66 - Convergence (10.3 s) | nlyx = 2.173e+10, nlxy = 2.17e+10, nly = 2.855e+07, gain = 0.0000319
67 - Convergence (10.3 s) | nlyx = 2.164e+10, nlxy = 2.161e+10, nly = 2.854e+07, gain = 0.0000314
68 - Convergence (11.0 s) | nlyx = 2.335e+10, nlxy = 2.333e+10, nly = 1.886e+07, gain = -0.0006021
69 - Convergence (11.2 s) | nlyx = 2.138e+10, nlxy = 2.136e+10, nly = 1.883e+07, gain = 0.0006950
70 - Convergence (10.2 s) | nlyx = 1.912e+10, nlxy = 1.91e+10, nly = 1.879e+07, gain = 0.0007962
71 - Convergence (10.1 s) | nlyx = 1.839e+10, nlxy = 1.837e+10, nly = 1.876e+07, gain = 0.0002574
72 - Convergence (10.1 s) | nlyx = 1.821e+10, nlxy = 1.819e+10, nly = 1.873e+07, gain = 0.0000628
73 - Convergence (10.0 s) | nlyx = 1.809e+10, nlxy = 1.807e+10, nly = 1.87e+07, gain = 0.0000448
74 - Convergence (10.3 s) | nlyx = 1.785e+10, nlxy = 1.783e+10, nly = 1.868e+07, gain = 0.0000829
75 - Convergence (10.2 s) | nlyx = 1.774e+10, nlxy = 1.772e+10, nly = 1.866e+07, gain = 0.0000387
76 - Convergence (10.3 s) | nlyx = 1.764e+10, nlxy = 1.762e+10, nly = 1.863e+07, gain = 0.0000357
77 - Convergence (10.7 s) | nlyx = 1.748e+10, nlxy = 1.746e+10, nly = 1.861e+07, gain = 0.0000574
78 - Convergence (10.3 s) | nlyx = 1.738e+10, nlxy = 1.736e+10, nly = 1.859e+07, gain = 0.0000333
79 - Convergence (10.3 s) | nlyx = 1.729e+10, nlxy = 1.727e+10, nly = 1.857e+07, gain = 0.0000313
80 - Convergence (10.3 s) | nlyx = 1.721e+10, nlxy = 1.719e+10, nly = 1.854e+07, gain = 0.0000297
81 - Convergence (10.3 s) | nlyx = 1.713e+10, nlxy = 1.711e+10, nly = 1.852e+07, gain = 0.0000284
82 - Convergence (10.3 s) | nlyx = 1.705e+10, nlxy = 1.703e+10, nly = 1.85e+07, gain = 0.0000273
83 - Convergence (10.3 s) | nlyx = 1.698e+10, nlxy = 1.696e+10, nly = 1.849e+07, gain = 0.0000262
84 - Convergence (10.3 s) | nlyx = 1.691e+10, nlxy = 1.689e+10, nly = 1.847e+07, gain = 0.0000253
85 - Convergence (10.3 s) | nlyx = 1.684e+10, nlxy = 1.682e+10, nly = 1.845e+07, gain = 0.0000245
86 - Convergence (10.3 s) | nlyx = 1.677e+10, nlxy = 1.675e+10, nly = 1.843e+07, gain = 0.0000238
87 - Convergence (10.3 s) | nlyx = 1.67e+10, nlxy = 1.668e+10, nly = 1.841e+07, gain = 0.0000231
88 - Convergence (10.3 s) | nlyx = 1.664e+10, nlxy = 1.662e+10, nly = 1.84e+07, gain = 0.0000224
89 - Convergence (10.3 s) | nlyx = 1.658e+10, nlxy = 1.656e+10, nly = 1.838e+07, gain = 0.0000218
90 - Convergence (10.3 s) | nlyx = 1.652e+10, nlxy = 1.65e+10, nly = 1.836e+07, gain = 0.0000213
91 - Convergence (10.3 s) | nlyx = 1.646e+10, nlxy = 1.644e+10, nly = 1.835e+07, gain = 0.0000207
92 - Convergence (10.3 s) | nlyx = 1.64e+10, nlxy = 1.638e+10, nly = 1.833e+07, gain = 0.0000203
93 - Convergence (10.3 s) | nlyx = 1.634e+10, nlxy = 1.633e+10, nly = 1.832e+07, gain = 0.0000198
super-resolution finished in 963.24033 seconds and 94 iterations
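As an aside, the UserWarnings in the log come from nitorch calling pre-1.9 PyTorch APIs and do not affect the result. For reference, a minimal sketch of the modern equivalents the warnings suggest (plain PyTorch against the torch.linalg API, not UniRes/nitorch source code):

```python
# Sketch of the replacements the deprecation warnings recommend.
# The matrices here are toy values, not anything from the UniRes run.
import torch

A = torch.tensor([[4.0, 0.0], [0.0, 9.0]])
B = torch.tensor([1.0, 1.0])

# torch.solve(B, A).solution  ->  torch.linalg.solve(A, B)
# (note the reversed argument order)
x = torch.linalg.solve(A, B)

# `(t - 1) // 2` triggers the floordiv warning; state the rounding mode
# explicitly instead:
size_pad = torch.tensor([5, 8])
half = torch.div(size_pad - 1, 2, rounding_mode='floor')

# torch.cholesky(A)              ->  torch.linalg.cholesky(A)
# torch.cholesky(A, upper=True)  ->  L.transpose(-2, -1).conj()
L = torch.linalg.cholesky(A)
U = L.transpose(-2, -1).conj()
```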
Thank you! You said that the result looks fine if you pass the FLAIR and MPRAGE individually; does that also apply to the T2SE?
Yes, though the T2SE also looks super-resolved when passed alongside the MPRAGE and FLAIR.
Okay, can you try setting unified_rigid=False and scaling=False in the settings and let me know how that goes?
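When driving UniRes from Python rather than the CLI, these options would be toggled on the settings object before running. A minimal sketch, assuming the import paths unires.struct.settings and unires.run.preproc and these attribute names (taken from the thread; check the package for the exact API):

```python
# Hypothetical sketch -- import paths and field names are assumptions,
# not verified against the UniRes source.
from unires.struct import settings
from unires.run import preproc

s = settings()
s.unified_rigid = False  # disable unified rigid registration
s.scaling = False        # disable intensity-scaling estimation
out = preproc(['sub-974-MPRAGE.nii.gz',
               'sub-974-FLAIR.nii.gz',
               'sub-974-T2SE.nii.gz'], s)
```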
Even with both options set to False, the issue seems to persist; I also tried with other subjects and observed the same phenomenon.
Turns out that setting the ADMM step size to 1 fixed the problem; that is, the rho parameter in the settings struct.
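For anyone hitting the same issue, the fix above amounts to pinning the step size instead of letting UniRes pick one (the log shows it defaulting to ADMM step-size=5371.2163). A minimal sketch, assuming the settings object exposes rho as an attribute and the import path unires.struct.settings (both taken from the thread, not verified):

```python
# Hypothetical sketch: the field name `rho` comes from the comment above;
# the import path is an assumption about the UniRes package layout.
from unires.struct import settings

s = settings()
s.rho = 1.0  # fix the ADMM step size rather than using the default heuristic
```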
Related Issues (15)
- Deal with cross-talk?
- Implement projection operator as torch layer
- Support read/write nifti when large number of observations. HOT 1
- Check accuracy of noise estimate
- Refactor into modules instead of using a Class
- Boolean values are difficult to set in the command-line tool HOT 1
- Pre-conditioning CG
- Add release/tag and dependencies version HOT 5
- nitorch.core.datasets.download_url name change HOT 2
- preprocessing with Label HOT 7
- Question about how to create low-resolution images from high-resolution images to run on UniRes HOT 1
- Does this also work with Head CT data? HOT 4
- Error when trying to run HOT 10