pycroscopy / atomai
Deep and Machine Learning for Microscopy
Home Page: https://atomai.readthedocs.io/
License: MIT License
Dear Maxim,
Thanks for sharing this amazing package. I am trying out the r-VAE module after the workshop, and I found that it runs significantly slower than the VAE. First, I am wondering whether this is usually the case, and second, whether I can use a GPU to accelerate the training.
Thanks,
Leixin
Currently, we have rather archaic data augmentation pipelines, which employ a combination of scikit-learn and OpenCV image processing functions. This leads to a significant slowdown of training with on-the-fly data augmentation. It would be nice if we could instead use the Kornia computer vision library, which was specifically designed for deep learning applications.
Amazing package! Thanks so much for the development!
Is it possible to change the learning rate within the .fit
call on an atomai model? For example:
from atomai.models import rVAE

rvae = rVAE(input_dim, latent_dim=2, conv_encoder=True, numlayers_encoder=3, numlayers_decoder=3)
rvae.fit(training_stack, training_cycles=100, batch_size=100, lr=0.0005)
I am unsure how to pass the correct params to torch.optim.Adam()
in .compile_trainer(). Should it look something like this?
from atomai.models import rVAE
import torch

rvae = rVAE(input_dim, latent_dim=2, conv_encoder=True, numlayers_encoder=3, numlayers_decoder=3)
rvae.compile_trainer(imstack_train, optimizer=torch.optim.Adam(params=??, lr=0.0005), training_cycles=100, batch_size=100)
rvae.fit()
Thanks!
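I can't speak to atomai's exact compile_trainer signature, but in plain PyTorch the optimizer takes the model's parameters, and the learning rate can also be changed on an existing optimizer through param_groups. A minimal sketch (the linear model here is a stand-in, not an atomai network):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # stand-in for the rVAE's parameters
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# change the learning rate on an already-constructed optimizer
for group in optimizer.param_groups:
    group["lr"] = 5e-4

print(optimizer.param_groups[0]["lr"])  # 0.0005
```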
Python 3.6 is no longer supported and should be removed from setup.py and the GitHub Actions workflow. @ziatdinovmax, are you OK if I go ahead with this?
The code below expects a 2-dimensional z_mean (as in the vanilla VAE case):
z_mean, z_std = rvae.encode(norm_patches)
rvae.decode(z_mean)
I tried a hacky way to decode, following forward_compute_elbo:
z_mean, z_std = rvae.encode(norm_patches)
z1, z2, z3 = z_mean[:, 0], z_mean[:, 1:3], z_mean[:, 3:]
x_coord_ = rvae.x_coord.expand(norm_patches.shape[0], *rvae.x_coord.size()).cpu()
phi = torch.tensor(z1)  # rotation angle
dx = torch.tensor(z2)   # translation
dx = (dx * rvae.dx_prior).unsqueeze(1)
x_coord_ = utils.transform_coordinates(x_coord_, phi, dx)
decoded_patches = rvae.decoder_net(x_coord_.to("cuda"), torch.tensor(z3).to("cuda"))
It would be useful to have this functionality available directly via vae.decode(z_mean).
After pip-installing atomai on my Windows laptop, I get the following error, which causes the Python kernel to crash:
Python 3.9.7 (default, Sep 16 2021, 16:59:28) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import atomai
OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.
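As the error message itself suggests, one unsafe but common workaround is to set KMP_DUPLICATE_LIB_OK before the first import that loads the OpenMP runtime, with the caveats Intel mentions about possible crashes or silently incorrect results:

```python
import os

# must be set before importing packages that load libiomp5md.dll (e.g. torch, atomai)
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

# ...then import the offending packages afterwards:
# import atomai
```

The safer long-term fix, per the hint, is ensuring only a single OpenMP runtime is linked into the environment.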
I installed atomai in a conda python 3.11 environment, and I got the following when attempting to import the package:
Python 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import atomai as aoi
Traceback (most recent call last):
File "/home/may/.conda/envs/intersect/lib/python3.11/site-packages/requests/compat.py", line 11, in <module>
import chardet
ModuleNotFoundError: No module named 'chardet'
Manually installing the chardet package fixed the problem, so hopefully updating requirements.txt will fix it.
Are you planning to open a discussion section? I have something to discuss. Shall I continue here?
I installed atomai 0.7.4 and numpy 1.26.0. I'm running the tutorial to make sure everything works and ran into an error.
For the multivariate analysis you run this line:
imstack = aoi.stat.imlocal(nn_output, coordinates, window_size=32, coord_class=1)
With numpy 1.26.0 this outputs the following error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[28], line 1
----> 1 imstack = aoi.stat.imlocal(nn_output, coordinates, window_size=32, coord_class=1)
File /opt/tljh/user/envs/atomai/lib/python3.10/site-packages/atomai/stat/multivar.py:87, in imlocal.__init__(self, network_output, coord_class_dict_all, window_size, coord_class)
85 self.nb_classes = network_output.shape[-1]
86 self.coord_all = coord_class_dict_all
---> 87 self.coord_class = np.float(coord_class)
88 self.r = window_size
89 (self.imgstack,
90 self.imgstack_com,
91 self.imgstack_frames) = self.extract_subimages_()
File /opt/tljh/user/envs/atomai/lib/python3.10/site-packages/numpy/__init__.py:324, in __getattr__(attr)
319 warnings.warn(
320 f"In the future `np.{attr}` will be defined as the "
321 "corresponding NumPy scalar.", FutureWarning, stacklevel=2)
323 if attr in __former_attrs__:
--> 324 raise AttributeError(__former_attrs__[attr])
326 if attr == 'testing':
327 import numpy.testing as testing
AttributeError: module 'numpy' has no attribute 'float'.
`np.float` was a deprecated alias for the builtin `float`. To avoid this error in existing code, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
The fix is easy but needs to be implemented.
Looks like a couple of tests are failing because np.int and np.float were deprecated. We should simply use int and float.
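The fix is mechanical: the builtin float (or np.float64, if the NumPy scalar type was specifically intended) is a drop-in replacement for the removed alias. A minimal illustration of the change needed at multivar.py line 87:

```python
coord_class = 1

# before (raises AttributeError on NumPy >= 1.24):
# self.coord_class = np.float(coord_class)

# after: the builtin behaves identically for this use
coord = float(coord_class)
print(coord)  # 1.0
```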
We may want to add the following neural network architecture to nets/fcnn.py module: https://www.pnas.org/content/115/2/254
@aghosh92 Can you please take a look at it?
The download links for the training and testing datasets in AtomicSemanticSegmention.ipynb do not work anymore. Please update the download links, thanks!
Line 357 in trainer.py,
gpu_usage = gpu_usage_map(torch.cuda.current_device())
may raise a FileNotFoundError.
A work-around is to wrap the call in a try/except block to bypass the error.
Windows may also flag the underlying command as unsafe.
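A minimal sketch of the proposed work-around (gpu_usage_map is an atomai internal; the wrapper below is a hypothetical helper, not existing API):

```python
def safe_gpu_usage(query_fn, device):
    """Call a GPU-usage query, returning None if the underlying tool
    (e.g. nvidia-smi) is missing instead of raising FileNotFoundError."""
    try:
        return query_fn(device)
    except FileNotFoundError:
        return None

# e.g. in trainer.py:
# gpu_usage = safe_gpu_usage(gpu_usage_map, torch.cuda.current_device())
```

Returning None lets the trainer simply skip GPU-usage reporting on systems where the query tool is unavailable.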
Function: create_lattice_mask
File: imgen.py
Issue: XY coordinates close to an edge can cause an error if their Gaussian extends past the image boundary.
Potential solution:
This can be avoided by handling the edge cases: remove the sections of the mask that extend beyond the image boundary and limit the indices to within the image width and height.
Additionally, the previous values of the mask are added to the new mask to allow smoother transitions between overlapping Gaussians.
This sort of solution is illustrated in the image below.
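A sketch of the proposed fix, assuming the mask is built by stamping one Gaussian per site (the function and parameter names are hypothetical, not the current create_lattice_mask code): the stamp window is clipped to the image bounds, and the new Gaussian is accumulated onto the existing mask values.

```python
import numpy as np

def add_clipped_gaussian(mask, cx, cy, sigma, amp=1.0):
    """Stamp a 2D Gaussian centered at (cx, cy) onto mask, clipping the
    stamp window to the image boundary and accumulating onto existing values."""
    h, w = mask.shape
    r = int(3 * sigma)  # stamp radius; covers ~99.7% of the Gaussian
    y0, y1 = max(cy - r, 0), min(cy + r + 1, h)
    x0, x1 = max(cx - r, 0), min(cx + r + 1, w)
    yy, xx = np.mgrid[y0:y1, x0:x1]
    mask[y0:y1, x0:x1] += amp * np.exp(
        -((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2)
    )
    return mask

# a site right at the corner no longer raises an indexing error
mask = add_clipped_gaussian(np.zeros((32, 32)), cx=0, cy=0, sigma=3)
```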
Passing a custom data loader to AtomAI models would be a useful feature. This would allow using, e.g., Kornia data augmentation pipelines. It could look like this:
segmodel = aoi.models.Segmentor(nb_classes=3)
segmodel.fit(train_loader=custom_train_loader, test_loader=custom_test_loader)  # instead of passing X_train, y_train, X_test, y_test
It will also be helpful to have a utility function that builds a dataloader with Kornia data augmentation functions.
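A sketch of what such a utility could look like, built on torch.utils.data (the function name and signature are hypothetical, and a kornia.augmentation pipeline would be passed in as the augment callable):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def make_aug_loader(images, labels, augment=None, batch_size=32, shuffle=True):
    """Build a DataLoader over (images, labels); if an augmentation callable
    (e.g. a kornia.augmentation pipeline) is given, apply it per batch."""
    dataset = TensorDataset(torch.as_tensor(images), torch.as_tensor(labels))
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=shuffle)
    if augment is None:
        return loader

    def augmented_batches():
        for x, y in loader:
            yield augment(x), y  # augment images only; labels pass through

    return augmented_batches()

x = torch.rand(16, 1, 32, 32)
y = torch.randint(0, 3, (16, 32, 32))
loader = make_aug_loader(x, y, augment=lambda b: b + 0.0, batch_size=8)
```

Applying the augmentation per batch, rather than per sample, is what lets a batched GPU library like Kornia slot in naturally.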
The semantic segmentation models need to be extended to work with 3D data. This should be very straightforward: just introduce an option to select between the 1D, 2D, and 3D cases in ConvBlock, UpsampleBlock, etc.
@aghosh92 Is this something you'd be interested in?
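The dimensionality switch could be as simple as a lookup from dim to the corresponding torch.nn layer class, which ConvBlock and UpsampleBlock would then use internally (a minimal sketch, not the existing atomai code):

```python
import torch
import torch.nn as nn

def get_conv(dim: int) -> type:
    """Return the torch convolution class for 1D, 2D, or 3D data."""
    return {1: nn.Conv1d, 2: nn.Conv2d, 3: nn.Conv3d}[dim]

# e.g. a 3D convolution acting on a (B, C, D, H, W) volume
conv3d = get_conv(3)(in_channels=1, out_channels=8, kernel_size=3, padding=1)
out = conv3d(torch.rand(2, 1, 8, 16, 16))
```

The same dictionary pattern extends to nn.BatchNorm1d/2d/3d and nn.MaxPool1d/2d/3d, keeping the block definitions dimension-agnostic.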
Training a VAE on grayscale image data with the "conv_encoder" and "conv_decoder" arguments of the VAE constructor set to True results in NaN values for the reported training loss. Printing the training loss history also shows an array filled with NaN values.
I have attached an image showing an example of the input data and the reported training loss: