yijiang1 / fold_slice Goto Github PK
Electron/X-ray ptychography and tomography/laminography
Home Page: https://chat.openai.com/g/g-YKQKluCy9-foldslicegpt
Currently the best way to correct large position errors in ptychography is to run a low-resolution reconstruction on cropped data. However, the reconstruction may fail if the cropped diffraction patterns are too small, which prevents position correction at large pixel sizes.
It should be possible to run position correction at a coarse (or any arbitrary) pixel size by resizing the object and the probe instead. One could then keep the full-size diffraction patterns and still perform efficient position correction.
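As a rough sketch of the scaling behind this idea (Python for illustration; fold_slice itself is MATLAB, and the function name here is made up): in far-field ptychography the reconstruction pixel size is dx = wavelength * distance / (N * detector_pixel), so cropping the patterns from N to N_crop coarsens the pixels by N / N_crop. Resizing the object and probe by the same factor would give the same coarse grid without discarding data.

```python
import numpy as np

# Illustrative only: recon_pixel_size is a hypothetical helper, not fold_slice API.
def recon_pixel_size(wavelength, distance, n_pixels, det_pixel):
    """Real-space pixel size of a far-field ptychographic reconstruction."""
    return wavelength * distance / (n_pixels * det_pixel)

wavelength = 1.24e-10   # ~10 keV X-rays [m]
distance = 2.0          # sample-to-detector distance [m]
det_pixel = 75e-6       # detector pixel size [m]

dx_full = recon_pixel_size(wavelength, distance, 512, det_pixel)
dx_crop = recon_pixel_size(wavelength, distance, 128, det_pixel)
print(dx_crop / dx_full)  # cropping 512 -> 128 coarsens the pixel size 4x
```

Resizing object and probe by this same factor of 4 would reproduce the coarse grid that cropping gives, while the position-correction step still sees all the measured counts.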
The TV minimization in the GPU engines currently uses the Chambolle algorithm, which seems too aggressive with the default parameters. It is worth testing other TV algorithms.
Here is a good review: https://doi.org/10.1016/j.softx.2019.04.003
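One of the simplest alternatives worth comparing against is plain gradient descent on a smoothed (eps-regularized) TV objective. The sketch below is illustrative Python, not the fold_slice MATLAB implementation, and all parameter values are arbitrary; with step <= 1/L (L bounded by 1 + lam*8/eps here) the descent is monotone.

```python
import numpy as np

# Sketch: minimize 0.5*|u - f|^2 + lam * sum(sqrt(|grad u|^2 + eps^2))
# by explicit gradient descent, as a less aggressive alternative to
# Chambolle's dual projection algorithm. Names are illustrative.

def grad2d(u):
    """Forward-difference gradient with zero last row/column."""
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    return ux, uy

def objective(u, f, lam, eps):
    ux, uy = grad2d(u)
    return float(0.5 * np.sum((u - f)**2)
                 + lam * np.sum(np.sqrt(ux**2 + uy**2 + eps**2)))

def tv_step(u, f, lam=0.1, eps=0.1, step=0.1):
    """One gradient step; -div below is the exact adjoint of grad2d."""
    ux, uy = grad2d(u)
    mag = np.sqrt(ux**2 + uy**2 + eps**2)
    px, py = ux / mag, uy / mag
    div = (np.diff(px, axis=1, prepend=0.0)
           + np.diff(py, axis=0, prepend=0.0))
    return u - step * ((u - f) - lam * div)

rng = np.random.default_rng(0)
f = np.zeros((32, 32)); f[8:24, 8:24] = 1.0   # noisy square phantom
f += 0.1 * rng.standard_normal(f.shape)
u, lam, eps = f.copy(), 0.1, 0.1
obj0 = objective(u, f, lam, eps)
for _ in range(50):
    u = tv_step(u, f, lam, eps)
print(objective(u, f, lam, eps) < obj0)  # the objective decreases
```

Tuning lam here plays the same role as the TV weight in the GPU engines; a scheme like this makes the regularization strength explicit rather than tied to the Chambolle defaults.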
In ptycho_solver.m, the loop "ll = 1:max(par.object_modes, par.Nscans)" over object constraints (e.g. apply_smoothness_constraint) throws an error if multiple scans share the same object, because the number of objects is then smaller than par.Nscans.
Need to find a way to determine the correct number of objects based on eng.share_object.
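A minimal sketch of the fix (Python for illustration; the variable names mirror the MATLAB parameters but the helper itself is hypothetical):

```python
# Derive the loop bound from the sharing flag instead of
# always using max(object_modes, Nscans).
def num_objects(object_modes, n_scans, share_object):
    """Number of object arrays the constraint loop should iterate over."""
    if share_object:
        # all scans reconstruct a single object; only the modes are separate
        return object_modes
    # one object (set of modes) per scan
    return max(object_modes, n_scans)

print(num_objects(object_modes=1, n_scans=4, share_object=True))   # 1
print(num_objects(object_modes=1, n_scans=4, share_object=False))  # 4
```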
A collection of Matlab bugs people may encounter when using fold_slice:
If using 1 layer and the GPU_MS engine, the probe ends up in the corner instead of the center, and the object is basically random.
Of course one can use the GPU engine instead of GPU_MS, but it would be good to make single-slice reconstruction work in the GPU_MS engine too.
This could be useful for resolving layer structures in multi-slice ptychographic reconstruction:
https://arxiv.org/abs/2102.00869
If diffraction patterns are removed based on p.avg_photon_threshold or p.avg_photon_threshold_ub, loading initial scan positions from .mat files fails because the number of positions no longer matches the number of diffraction patterns.
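One way to fix this is to build the keep-mask from the thresholds once and apply it to both arrays, so positions loaded from a .mat file stay aligned with the surviving patterns. Illustrative Python sketch; the function and names are not fold_slice's API:

```python
import numpy as np

def filter_positions(positions, diffraction, threshold):
    """Keep only positions whose diffraction pattern passes the threshold."""
    avg_photons = diffraction.mean(axis=(1, 2))  # per-pattern average counts
    keep = avg_photons >= threshold
    return positions[keep], diffraction[keep]

rng = np.random.default_rng(1)
diff = rng.poisson(5.0, size=(10, 8, 8)).astype(float)
diff[3] = 0.0   # a "bad" pattern with no photons
pos = np.arange(10 * 2).reshape(10, 2).astype(float)
pos_f, diff_f = filter_positions(pos, diff, threshold=0.5)
print(len(pos_f) == len(diff_f))  # the two arrays stay aligned
```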
Add a feature that retries loading a ptychographic reconstruction several times before throwing an error. This could be useful for large-scale tomography processing and when the data drive is unstable.
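A retry wrapper could look like the following sketch (Python for illustration; the loader, retry count, and delay are hypothetical, not fold_slice parameters):

```python
import time

def load_with_retries(load_fn, path, n_retries=3, delay_s=5.0):
    """Try load_fn(path) several times before giving up (for flaky drives)."""
    last_err = None
    for attempt in range(n_retries):
        try:
            return load_fn(path)
        except OSError as err:
            last_err = err
            time.sleep(delay_s)  # give an unstable data drive time to recover
    raise RuntimeError(f"failed to load {path} after {n_retries} attempts") from last_err

# usage sketch: a loader that fails twice, then succeeds
calls = {"n": 0}
def flaky_load(path):
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("transient read error")
    return "recon"

print(load_with_retries(flaky_load, "scan_0001.mat", delay_s=0.0))  # recon
```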
Record the pixel size of each ptychographic reconstruction when loading them for tomography.
This is useful if ptycho reconstructions have different pixel sizes.
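A sketch of how the recorded pixel sizes could be used (Python for illustration; the record layout is made up, and the actual loading and resampling are left abstract): pick the coarsest pixel size as the common grid and compute a zoom factor per projection.

```python
# Hypothetical per-reconstruction records; only pixel_size_m matters here.
recons = [
    {"scan": 101, "pixel_size_m": 10.0e-9},
    {"scan": 102, "pixel_size_m": 12.5e-9},
    {"scan": 103, "pixel_size_m": 10.0e-9},
]

target = max(r["pixel_size_m"] for r in recons)   # coarsest common grid
for r in recons:
    r["zoom"] = r["pixel_size_m"] / target        # resampling factor per scan
print([round(r["zoom"], 2) for r in recons])      # [0.8, 1.0, 0.8]
```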
I think it's possible to reuse existing code to optimize the sample thickness (delta_z) during GPU_MS reconstruction.
Functions worth checking out:
+GPU_MS/LSQML.m
+GPU_MS/private/gradient_NF_propagation_solver.m
+ML_MS/gradient_ptycho_MS.m
+ML_MS/calc_ms_grad.c
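The core idea can be sketched as treating delta_z as a free parameter of the inter-layer propagator and minimizing the data mismatch over it. Below is an illustrative Python sketch, not the LSQML code: the paraxial angular-spectrum propagator is standard, but the error metric and the grid search stand in for whatever gradient update the listed functions would implement.

```python
import numpy as np

def propagate(wave, dz, wavelength, dx):
    """Paraxial angular-spectrum free-space propagation by distance dz."""
    fx = np.fft.fftfreq(wave.shape[0], d=dx)
    fx2 = fx[:, None]**2 + fx[None, :]**2
    kernel = np.exp(-1j * np.pi * wavelength * dz * fx2)
    return np.fft.ifft2(np.fft.fft2(wave) * kernel)

def amplitude_error(dz, wave, target, wavelength, dx):
    """Mismatch between the propagated amplitude and a measured target."""
    return float(np.sum((np.abs(propagate(wave, dz, wavelength, dx)) - target)**2))

# synthetic test: the "true" layer spacing is 5 um
rng = np.random.default_rng(0)
wave = np.exp(1j * rng.standard_normal((64, 64)))    # random-phase exit wave
wavelength, dx, dz_true = 2.0e-12, 1.0e-10, 5.0e-6   # ~300 kV electrons, 1 A pixels
target = np.abs(propagate(wave, dz_true, wavelength, dx))

dz_grid = np.linspace(3e-6, 7e-6, 41)                # scan dz in 0.1 um steps
errs = [amplitude_error(dz, wave, target, wavelength, dx) for dz in dz_grid]
dz_best = float(dz_grid[int(np.argmin(errs))])
print(abs(dz_best - dz_true) < 1e-9)                 # recovers the true spacing
```

Inside LSQML one would presumably replace the grid search with an analytic or finite-difference gradient of this error with respect to dz, updated alongside the object and probe.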
It might be useful to do position correction on multiple object layers in multi-slice ptychography.
One way is to calculate position updates using each object layer and then "average" them at the end of an iteration.
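The averaging step could look like this sketch (Python for illustration; the weighting scheme and names are made up, and per-layer update estimation is left abstract):

```python
import numpy as np

def combine_layer_updates(updates, weights=None):
    """Weighted mean of per-layer shifts; updates: (n_layers, n_positions, 2)."""
    updates = np.asarray(updates)
    if weights is None:
        weights = np.ones(updates.shape[0])
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    # weighted average over the layer axis -> (n_positions, 2)
    return np.tensordot(weights, updates, axes=(0, 0))

layer_updates = np.array([
    [[0.2, 0.0], [0.0, 0.4]],   # shifts estimated from layer 1
    [[0.4, 0.0], [0.0, 0.0]],   # shifts estimated from layer 2
])
print(np.round(combine_layer_updates(layer_updates), 3).tolist())  # [[0.3, 0.0], [0.0, 0.2]]
```

Weighting by each layer's contrast or gradient magnitude, instead of a plain mean, might make the combined update more robust.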
Add wrapper functions and an example script for an automatic parameter tuning workflow based on Bayesian optimization with Gaussian processes.
These functions can be used with external python libraries and enable automatic optimization of reconstruction and experimental parameters in electron/X-ray ptychography. For more details, see https://arxiv.org/abs/2204.11815.
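To illustrate the loop such a wrapper would drive, here is a toy Gaussian-process surrogate with an upper-confidence-bound acquisition rule, written from scratch in Python. Everything here is a stand-in: a real workflow would call an external BO library (as in the paper above) and recon_quality would run an actual reconstruction and return a quality metric.

```python
import numpy as np

def gp_posterior(x_train, y_train, x_test, length=0.2, noise=1e-6):
    """GP posterior mean/std with an RBF kernel and zero prior mean."""
    k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / length)**2)
    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    Ks, Kss = k(x_train, x_test), k(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks.T @ alpha
    var = np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks))
    return mean, np.sqrt(np.maximum(var, 0.0))

def recon_quality(alpha):
    """Stand-in for 'run a reconstruction, return its quality score'."""
    return -(alpha - 0.6)**2   # toy objective, best at alpha = 0.6

grid = np.linspace(0.0, 1.0, 101)   # candidate values of one tuning parameter
xs = [0.1, 0.9]                     # initial trials
ys = [recon_quality(x) for x in xs]
for _ in range(10):
    mean, std = gp_posterior(np.array(xs), np.array(ys), grid)
    x_next = float(grid[np.argmax(mean + 2.0 * std)])   # UCB acquisition
    xs.append(x_next)
    ys.append(recon_quality(x_next))
x_best = xs[int(np.argmax(ys))]
print(round(x_best, 2))
```

The same loop generalizes to several parameters at once; the GP surrogate keeps the number of expensive reconstructions small.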
The matrix for storing bad pixels, fmask, is always the same size as the full stack of diffraction patterns (3D). For data as large as 256^4, fmask alone can use up to 16 GB of (GPU!) memory. For most datasets fmask does not change between diffraction patterns, so it should be a single 2D matrix instead of 3D. Even when fmask is pattern-dependent, bad pixels are only a small fraction, so storing the indices of the bad pixels instead of a 3D matrix of mostly ones would save a lot of memory.
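The savings are easy to quantify for the 256^4 example above (256^2 patterns of 256 x 256 pixels). Illustrative Python; the 16 GB figure in the issue corresponds to single-precision storage, while a 1-byte logical mask is a quarter of that:

```python
n_patterns = 256 * 256
pattern_shape = (256, 256)

# dense 3D mask at 1 byte per element (x4 for single precision -> ~16 GB)
dense_bytes = n_patterns * pattern_shape[0] * pattern_shape[1]
# one shared 2D mask when all patterns use the same bad pixels
shared_2d_bytes = pattern_shape[0] * pattern_shape[1]
# pattern-dependent case: say 0.1% bad pixels, stored as int32 indices
bad_fraction = 1e-3
index_bytes = int(n_patterns * pattern_shape[0] * pattern_shape[1]
                  * bad_fraction) * 4

print(dense_bytes // 2**30)      # 4 GiB as uint8 (16 GB in single precision)
print(shared_2d_bytes // 2**10)  # 64 KiB for a single shared 2D mask
print(index_bytes // 2**20)      # ~16 MiB as bad-pixel indices
```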
It seems some diffraction patterns are never used if the group size is too large.
https://www.osapublishing.org/oe/fulltext.cfm?uri=oe-26-10-12585&id=386092
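One plausible mechanism, sketched in Python purely as a guess (this may not be how the actual grouping code is written): if positions are partitioned with floor division, the last n % group_size patterns never enter any group, and ceil division fixes it.

```python
def make_groups_floor(n_positions, group_size):
    """Suspect grouping: floor division silently drops the remainder."""
    n_groups = n_positions // group_size
    return [list(range(g * group_size, (g + 1) * group_size))
            for g in range(n_groups)]

def make_groups_all(n_positions, group_size):
    """Fixed grouping: ceil division keeps every pattern (last group smaller)."""
    n_groups = -(-n_positions // group_size)   # ceil division
    return [list(range(g * group_size, min((g + 1) * group_size, n_positions)))
            for g in range(n_groups)]

print(sum(len(g) for g in make_groups_floor(100, 30)))  # 90: ten patterns unused
print(sum(len(g) for g in make_groups_all(100, 30)))    # 100: all patterns used
```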
Michal's current implementation has very limited use. Need to improve the GPU engine for applications at the APS.
Checkpoints:
Scan positions
Algorithm:
all modes show as probe 1
The online FSC estimation code in +engines/+GPU_MS/+analysis/ does not work because the engine no longer supports multiple-scan reconstruction.
Need to use external scripts to calculate FSC score for multi-slice reconstructions.
Functions worth checking out:
ptycho/+core/+analysis/calc_FSC.m and aligned_FSC_template.m
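For reference, the core of what such an external script computes is the Fourier ring (shell, in 3D) correlation between two half-dataset reconstructions. The Python sketch below shows only that core; calc_FSC.m additionally handles sub-pixel alignment and windowing, which are omitted here.

```python
import numpy as np

def frc(img1, img2, n_rings=16):
    """Fourier ring correlation between two same-size 2D images."""
    f1 = np.fft.fftshift(np.fft.fft2(img1))
    f2 = np.fft.fftshift(np.fft.fft2(img2))
    n = img1.shape[0]
    y, x = np.indices(img1.shape)
    r = np.hypot(x - n // 2, y - n // 2)
    edges = np.linspace(0, n // 2, n_rings + 1)
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        ring = (r >= lo) & (r < hi)
        num = np.abs(np.sum(f1[ring] * np.conj(f2[ring])))
        den = np.sqrt(np.sum(np.abs(f1[ring])**2) * np.sum(np.abs(f2[ring])**2))
        out.append(num / max(den, 1e-12))
    return np.array(out)

img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0
curve = frc(img, img)   # identical inputs give correlation 1 in every ring
print(bool(np.allclose(curve, 1.0)))
```

For multi-slice reconstructions the two inputs would be the same slice (or the summed phase) from two independent half-dataset reconstructions, with a resolution estimate read off where the curve crosses the chosen threshold (e.g. the 1/2-bit criterion).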