
fast4dreg's Introduction

drawing

Overview

Fast4DReg is a Fiji macro for drift correction of 2D and 3D videos and can correct drift in the x-, y- and/or z-directions. Fast4DReg creates intensity projections along both axes, estimates their drift using cross-correlation based drift correction, and then translates the video frame by frame (Figure 1). Additionally, Fast4DReg can be used to align multi-channel 2D or 3D images, which is particularly useful for instruments that suffer from channel misalignment.

example1

Fast4DReg tools

Fast4DReg consists of four scripts and two sub-menus. The Legacy 2D menu stores the estimate and apply functions used by the scripts; these functions are macro-callable. The NanoJTable IO menu stores functions for opening and saving NanoJ tables, which are also macro-callable.

image

Fast4DReg scripts

  • time_estimate+apply: This script estimates drift in a 2D or 3D video and applies the correction to the same dataset. Correction can be performed on a single video or on multiple 2D and/or 3D videos in batch. The script also outputs a settings.csv file with the relevant script parameters and file paths. This file can be used to correct a similar drift in another dataset, for example another channel.

  • time_apply: This script applies a drift correction from a settings.csv file to another video with the same number of frames.

  • channel_estimate+apply: This script estimates the misalignment of channels in a 2D or 3D dataset and applies the correction to the same dataset. Correction can be performed on a single image or on multiple 2D and/or 3D images in batch. The script also outputs a settings.csv file with the relevant script parameters and file paths. This file can be used to correct a similar misalignment in another dataset.

  • channel_apply: This script applies a channel correction from a settings.csv file to another multi-channel image.

Drift correction workflow

xy-correction (2D and 3D)

  1. First, Fast4DReg creates lateral intensity projections (average or maximum) at each time point to create 2D videos. This step is skipped when the input is already a 2D video.
  2. Second, Fast4DReg uses a drift correction algorithm to estimate the linear x-y drift between two images by calculating their cross-correlation matrix (CCM). The location of the intensity peak in the CCM determines the linear shift between the two images. Depending on the data, either the first frame or the previous frame of the raw data can be set as the reference frame.
  3. Once the drift is estimated, the dataset is corrected frame by frame according to the estimated drift (a minimal macro sketch of these steps follows this list).
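
To make the idea concrete, the sketch below reproduces the xy branch by hand in the ImageJ macro language. This is a minimal sketch, not the actual Fast4DReg implementation: it assumes a single-channel hyperstack is open and NanoJ-Core is installed, the window name and drift-table path are hypothetical, and the real script additionally handles batching, drift plots, cropping and the settings.csv export.

    // Minimal sketch of the xy branch (assumptions: a single-channel hyperstack is open
    // and NanoJ-Core is installed; the drift-table path below is hypothetical).
    title = getTitle();
    run("Z Project...", "projection=[Max Intensity] all");   // lateral projection: one 2D frame per time point
    // Estimate drift on the projected 2D video with NanoJ-Core
    // (Plugins > NanoJ-Core > Drift Correction > Estimate Drift); this writes a DriftTable.njt file.
    // Then translate the original stack frame by frame according to that table:
    selectWindow(title);
    run("Correct Drift", "choose=[/path/to/results/DriftTable.njt]");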

z-correction (3D only)

  1. Fast4DReg creates frontal intensity projections (average or maximum) at each time point to create 2D videos along the x- or y-axis.
  2. Fast4DReg uses a drift correction algorithm to estimate the linear z-drift between two images by calculating their cross-correlation matrix (CCM). The location of the intensity peak in the CCM determines the linear shift between the two images. Depending on the data, either the first frame or the previous frame of the raw data can be set as the reference frame.
  3. Once the z-drift is estimated, the dataset is corrected frame by frame according to the estimated drift (see the reslice sketch below).
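
As a rough illustration of step 1, the reslice-and-project sketch below produces such a frontal view for one time point; this is an assumption-laden sketch, since Fast4DReg performs the operation per time point and follows the Reslice mode option described in the walkthrough further down.

    // Minimal sketch of producing the frontal view used for z-drift estimation
    // (assumes a single-time-point z-stack is active; Fast4DReg repeats this for every time point).
    run("Reslice [/]...", "output=1.000 start=Top avoid");   // "Top" reslices along the x-axis; use start=Left for the y-axis
    run("Z Project...", "projection=[Max Intensity]");        // collapse the resliced stack into a single 2D frontal view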

If using multi-channel images, the channels need to be split first. The drift should be estimated on the channel with the most stable structures (for example the endothelium rather than migrating cancer cells). The correction can then be applied to the other channel(s) using the time_apply script.

image

Figure 1: The Fast4DReg pipeline. Fast4DReg corrects drift in the x-y direction by first creating intensity projections to produce 2D videos. It then estimates the linear x-y drift between two images by calculating their cross-correlation matrix and applies the correction to the stack. To correct drift in the z-direction, Fast4DReg creates frontal intensity projections and corrects the drift as described above. The lateral and axial drift corrections can also be used independently. Fast4DReg outputs a folder containing the corrected images, drift plots, a drift table and a settings file that can be applied to correct another image with the same settings.

Installation

Fast4DReg

Fast4DReg is easy to install by enabling the Fast4DReg update site:

  • Open ImageJ
  • Navigate to Help -> Update -> Manage update sites
  • Select Fast4DReg

image

  • Once selected, click Close and then Apply changes.
  • Restart Fiji.

Dependencies

Fast4DReg is dependent on Bio-Formats, which can be installed through the Fiji update site:

  • Open ImageJ
  • Navigate to Help -> Update -> Manage update sites
  • Select Bio-Formats

image

  • Once selected, click Close and then Apply changes.
  • Restart Fiji.

Step-by-step walkthrough

Estimate and apply drift

Before starting

Prepare your image so that it has only one channel. If you have multiple channels, they can all be in the same folder as separate files.
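
A minimal macro sketch of this preparation step is shown below. It assumes the multi-channel video is open in Fiji, that no other images are open, and that the output folder is chosen interactively; it simply saves each channel as its own TIFF so the channels can be corrected separately.

    // Minimal sketch: split an open multi-channel video and save one TIFF per channel
    // (assumes no other images are open; the output folder is chosen interactively).
    dir = getDirectory("Choose an output folder");
    run("Split Channels");                        // creates C1-..., C2-..., windows and closes the original
    for (i = 1; i <= nImages(); i++) {
        selectImage(i);
        saveAs("Tiff", dir + getTitle());         // e.g. C1-myVideo.tif, C2-myVideo.tif
    }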

Running the script

  1. Open the "time_estimate+apply" from the Fiji Plugins menu: Plugins -> Fast4FReg -> time_estimate+apply. The user interface opens.

image

Figure 2: Estimate and apply user interface

  2. In the user interface:
  • Experiment number: Used as an identifier in the output folder name.
  • Select the path to the file to be corrected: Navigate to your image by using Add files.. or drag and drop into the white box. Files can be 2D and 3D videos or a mixture of those.
  • xy-drift correction: if you want to correct for xy-drift, tick the xy-drift correction box.
  • Projection type: Select the projection type used for xy-drift estimation (maximum or average intensity). Average projection usually works better for very noisy images.
  • Time averaging: This sets the number of frames to average together to make coarser time points on which the cross-correlation analysis is run to calculate drift. Setting this value to 1 calculates straight frame-to-frame cross-correlations; while this captures drift very accurately, it is also very susceptible to noise. Conversely, setting this value high averages out noise but also gives a coarser sampling of the drift (which is then interpolated).
  • Maximum expected drift: This refers to the maximum expected drift between the first frame of the dataset and the last frame of the dataset in units of pixels. Setting this to 0 will allow the algorithm to automatically determine the drift without any limitations. It is only really worth changing this value from 0 if running the algorithm gives incorrect results with too large jumps in estimated drift.
  • Reference frame: If this is set to ‘first frame (default, better for fixed)’ then every averaged group of frames will be compared to the first average group of frames to calculate drift. If this is set to ‘previous frame (better for live)’ then every averaged group of frames will be compared to the previous averaged group of frames. For static samples, it is best to compare to the first frame, and for live samples where there may be slow scale drift overlaying the faster scale sample motion, it is better to compare to the previous frame.
  • Crop output: Crops away the black borders created by the image shifting during correction.
  • z-drift correction: If you want to correct for z-drift, tick the z-drift correction box.
  • Reslice mode: Reslice mode lets you decide if you want to create the projection along the x-axis (top) or y-axis (left).
  • Projection type: Select the projection type used for z-drift estimation (maximum or average intensity). Average projection usually works better for very noisy images.
  • Extend stack to fit: Adds extra slices to the stack to ensure that no data is lost when the stack moves up and/or down.
  • Save RAM: If ticked, the image is converted to 32-bit for processing while the original bit depth is preserved. This saves RAM and speeds up the process.
  3. Click OK. The script will run.

  4. When the script has completed, you will find the following files in a new folder: the corrected images, drift plots, drift tables, and a settings file that you can use to run the script on another image with identical parameters (e.g. another channel).

    The folder will have a unique identifier: fileName + date + experiment number. If you plan to apply the correction to another channel, make sure not to move these files to another folder.

Apply drift

  1. Open the "time_apply" from the Fiji Plugins menu: Plugins -> Fast4FReg -> time_apply. The user interface opens.

image

Figure 3: Apply user interface

  • Select the path to the file to be corrected: Navigate to your image by using Add files.. or drag and drop into the white box. The files can be located in different folders.

  • Select the settings file (.csv): Navigate to your settings file (called settings.csv).

  • Select where to save the corrected images: All corrected images will be saved to this folder.

  2. Click OK.

Done!

Known issues

Importance of file locations

Make sure not to move the drift table from the results folder, as the path to the drift table is hardcoded in the settings.csv file.
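
If you are unsure which drift-table path a given settings file points to, the small sketch below prints its contents so you can verify that the path still exists. It uses only generic ImageJ macro calls; the path is hypothetical and the exact column layout of settings.csv is not reproduced here.

    // Print the contents of a settings file to check which drift-table path it references
    // (the path below is hypothetical).
    settings = File.openAsString("/path/to/results/settings.csv");
    print(settings);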

Result images are black

Try disabling time averaging (i.e. set it to 1).

Operating system specificity

If you create a settings file with the estimate+apply script and use it to correct another image with the apply script, make sure you use the same operating system (e.g. Windows, Mac, Linux), as the file paths stored in the settings file are OS-specific.

Contributors

When using this script, please cite the NanoJ paper and our pre-print

Laine, R. F., Tosheva, K. L., Gustafsson, N., Gray, R., Almada, P., Albrecht, D., Risa, G. T., Hurtig, F., Lindås, A. C., Baum, B., Mercer, J., Leterrier, C., Pereira, P. M., Culley, S., & Henriques, R. (2019). NanoJ: a high-performance open-source super-resolution microscopy toolbox. Journal of physics D: Applied physics, 52(16), 163001. https://doi.org/10.1088/1361-6463/ab0261

Pylvänäinen J.W., Laine, R. F., Ghimire, S., Follain G., Henriques, R & Jacquemet G. (2022). Fast4DReg: Fast registration of 4D microscopy datasets. bioRxiv 2022.08.22.504744; https://doi.org/10.1101/2022.08.22.504744

Change log

230923 Version 2.1 (the clean 'n fast dimensionalist)

Made compatible with headless execution.

230112 Version 2.1 (the clean 'n fast dimensionalist)

The Fast4DReg code has been cleaned and the RAM-saving mode improved. The RAM-saving mode now runs significantly faster than before.

221014 Version 2.0 (the dimensionalist)

Fast4DReg now runs independently of NanoJ and can correct drift in both 2D and 3D images.

220222 Version 1.0 (pre-print)

All four scripts have been tested and work well. Ready for pre-print.

fast4dreg's People

Contributors

jpylvanainen, brunomsaraiva, guijacquemet, djpbarry


fast4dreg's Issues

Time estimate/apply underestimates jump in xy drift (maybe a s/n or NanoJ limitation?)

Hello-

I am trying to use Fast4DReg to correct drift in a 2D GCaMP imaging experiment, where I have several thousand frames imaged both before and after a manipulation. Drift within the pre and post acquisitions is pretty minimal, but there is a visually apparent several-pixel shift between the pre and post acquisitions. Trying to correct across this shift with Fast4DReg doesn't seem to work - even after correction, the shift is still there, and is estimated at ~0.03 px in x and y, when measuring by hand it's more like 5+ pixels.

I have similar results when trying to just use the NanoJ-Core plugin to drift correct the same data, which is why I suspect perhaps this is a limitation of the cross-correlation algorithm on this dataset. The images are definitely low signal/noise (maximum intensities ~120-130 on a minimum of ~100), which I see in the paper can cause the algorithm to perform poorly. However even if I collapse the data to 2 frames with a max intensity projection of the pre and post and try to correct across it with Fast4DReg, the drift is not corrected; similarly if I turn on time averaging to say 100 frames and correct the time series, the jump persists (and should be between averaged frames, so not just averaged out). StackReg does fine with this dataset with either rigid body or translation, but is quite slow, whereas Fast4DReg is comparatively fast (8 minutes instead of a few hours for 6K frames) so I'd like to use it, but am struggling to understand if my data are just limited for this particular algorithm, or if there is a piece of data preparation I am missing or can change.

Thanks

Attached screenshots (captions only):
  • The signal-to-noise ratio is low.
  • The shift between pre and post is ~5-8 pixels (mostly in Y).
  • The calculated shift is only ~0.03 pixels.

Error: n frames in drift table != n frames in image

Apologies for not posting this sooner. When trying to run Fast4DReg, I get the error below in the bug report. This is my first GitHub issue submission, so I hope I have uploaded the image correctly with the error message. I have pasted the debug and log outputs below as well. There is a note in the log that NanoJ has detected a composite image. This is weird because the image has a single channel (stack dimensions: width = 1408, height = 1040, channels = 1, slices = 1, frames = 10).

Bioformats is installed.

NanoJCore is installed, and NanoJCore>Drift Correction>Estimate Drift (checking Apply Drift) seems to work fine.

(screenshot of the error message attached)

The debug output is pasted below.
Name * Value
Memory * 281MB of 14500MB (1%)
nImages() * 8
getTitle() * "DUP"
extend_stack_to_fit (g) * 1
time_z (g) * 1
crop_output (g) * 1
projection_type_xy (g) * "Max Intensity"
z_registration (g) * 0
reference_z (g) * "first frame (default, better for fixed)"
max_z (g) * 0
my_file_path (g) * "C:\tempImages\SBR010_HeLa\SBR010concatenated_ch1_B4_10.tif"
max_xy (g) * 0
ram_conservative_mode (g) * 0
projection_type_z (g) * "Max Intensity"
exp_nro (g) * 1
time_xy (g) * 1
XY_registration (g) * 1
reslice_mode (g) * "Top"
reference_xy (g) * "previous frame (better for live)"
MonthNames * array[12]
year * "2022"
month * 8
week * 6
day * 17
hour * 6
min * 47
sec * 5
msec * 476
timeStamp * "2022-Sep-17-001"
filename_no_extension * "SBR010concatenated_ch1_B4_10"
results * "C:/tempImages/SBR010_HeLa/SBR010concatenated_ch1_B4_10_2022-Sep-17-001"
settings_file_path * "C:/tempImages/SBR010_HeLa/SBR010concatenated_ch1_B4_10_2022-Sep-17-001\SBR010con..."
DriftTable_path_XY * "C:/tempImages/SBR010_HeLa/SBR010concatenated_ch1_B4_10_2022-Sep-17-001\SBR010con..."
DriftTable_path_Z * "C:/tempImages/SBR010_HeLa/SBR010concatenated_ch1_B4_10_2022-Sep-17-001\SBR010con..."
t_start * 1663422425490.0000
options * "open=[C:\tempImages\SBR010_HeLa\SBR010concatenated_ch1_B4_10.tif] autoscale colo..."
width * 1408
height * 1040
channels * 10
slices * 1
frames * 1
thisTitle * "SBR010concatenated_ch1_B4_10.tif"
i * 0

Error: Number of frames in drift-table different from number of frames in image... in line 200:

	run ( "Correct Drift" , "choose=[" + DriftTable_path_XY + "DriftTable.njt]" <)> ; 

Log Output:

My file path: C:\tempImages\SBR010_HeLa\SBR010concatenated_ch1_B4_10.tif
Estimating the xy-drift....
xy-drift table path: C:/tempImages/SBR010_HeLa/SBR010concatenated_ch1_B4_10_2022-Sep-17-001\SBR010concatenated_ch1_B4_10-Max Intensity_xy_

Applying the xy-correction to the stack....
WARNING: detected composite image. NanoJ is optimized for pure image stacks and may yield errors.
ERROR: java.lang.RuntimeException: Macro canceled
in DialogListener of nanoj.core.java.gui.BaseDialog$MyDialogListener@198676c5
at ij.macro.Interpreter.error(Interpreter.java:1404)
from ij.macro.Interpreter.abort(Interpreter.java:2152)

Error correcting channels in 3D dataset

Hello,

I am using Windows 10 with an updated version of FIJI (1.54f). I am trying to align four channels of a 3D dataset; my setup and the resulting error are shown in the attached screenshots.

Debug.csv
Exception.txt

I can share the images if that would help troubleshoot. All the images are of the same dimensions (17x9969x9487)

Step by Step Walkthrough out of date

The step by step walkthrough appears to reference deprecated code / modality of work.

The line: "Open the "estimate-drift" script and click run. The user interface opens." in the readme should probably be correct to use the current script names & perhaps and indication of where/how to find them in the menu.

Stack to hyperstack error

Hi-

I've tried to run the estimate+apply on a single-channel image with 6 time points and 29 z-slices per time point (hyperstack). On running the plugin and choosing the "best for live" options, the script errors at line 289.

(screenshot of the error attached)

I played around with copying the macro and editing lines, and it seems like the issue is arising from the various sections that are swapping channels to time. If I comment all of these out, I have no issues. I was also getting lots of NanoJ warnings about it being optimized for pure stacks with the base plugin that went away when I commented out the hyperstack reordering.

I didn't try too hard to follow why the stacks might get reordered. Is this an issue with my input file format, or something I'm missing in the file organization? From what I'd read, the plugin would take a c=1 z=n slices t=n frames hyperstack.

Here is the debug output:

Memory * 1920MB of 7749MB (24%)
nImages() * 8
getTitle() * "AllStarStack"
extend_stack_to_fit (g) * 1
time_z (g) * 1
crop_output (g) * 1
projection_type_xy (g) * "Max Intensity"
z_registration (g) * 1
reference_z (g) * "previous frame (better for live)"
max_z (g) * 0
max_xy (g) * 0
ram_conservative_mode (g) * 0
projection_type_z (g) * "Max Intensity"
files (g) * array[1]
exp_nro (g) * 1
time_xy (g) * 1
XY_registration (g) * 1
reslice_mode (g) * "Top"
reference_xy (g) * "previous frame (better for live)"
MonthNames * array[12]
year * "2022"
month * 9
week * 5
day * 14
hour * 15
min * 59
sec * 24
msec * 620
timeStamp * "2022-Oct-14-001"
t_start * 1665777564620.0000
p * 0
p_start * 1665777564620.0000
options * "open=[/Users/(username)/Desktop/untitled folder/C3.tif] autoscale color_mode=Default ..."
width * 1024
height * 1024
channels * 6
slices * 29
frames * 1
filename_no_extension * "C3"
results * "/Users/(username)/Desktop/untitled folder/C3_2022-Oct-14-001/"
settings_file_path * "/Users/(username)/Desktop/untitled folder/C3_2022-Oct-14-001/C3_settings.csv"
DriftTable_path_XY * "/Users/(username)/Desktop/untitled folder/C3_2022-Oct-14-001/C3-Max Intensity_xy_"
DriftTable_path_Z * "/Users/(username)/Desktop/untitled folder/C3_2022-Oct-14-001/C3-Max Intensity-Top_z_"
thisTitle * "C3.tif"
i * 29


Error: channels x slices x frames <> stack size in line 289:

	run ( "Stack to Hyperstack..." , "order=xyctz channels=1 slices=" + slices + " frames=" + frames + " display=Color" <)>...
