qpg-mit / slmsuite

Python package for high-performance spatial light modulator (SLM) control and holography.

Home Page: https://slmsuite.readthedocs.io

License: MIT License

Python 100.00%
holography optics photonics slm spatial-light-modulator adaptive-optics cgh computer-generated-hologram computer-generated-holography gerchberg-saxton gerchberg-saxton-algorithm hologram phase-retrieval wavefront-sensing

slmsuite's Introduction


High-Performance Spatial Light Modulator Control and Holography


slmsuite combines GPU-accelerated beamforming algorithms with optimized hardware control, automated calibration, and user-friendly scripting to enable high-performance programmable optics with modern spatial light modulators.

Key Features

Installation

Install the stable version of slmsuite from PyPI using:

$ pip install slmsuite

Install the latest version of slmsuite from GitHub using:

$ pip install git+https://github.com/QPG-MIT/slmsuite

Documentation and Examples

Extensive documentation and API reference are available through readthedocs.

Examples can be found embedded in documentation, live through nbviewer, or directly in source.


slmsuite's People

Contributors

cpanuski, ichristen, lianebernstein, tpr0p


slmsuite's Issues

_phase2gray

Hello,

I have a question about the _phase2gray() method in the slm.py file. There, we pass self.display as the out variable, and self.display has (depending on the SLM bit resolution) a dtype of np.uint8 or np.uint16.

Later, the float -> int conversion gets enforced with np.copyto(*args, casting="unsafe").
Three questions:

  1. When out in np.copyto has 8-bit unsigned integer types, I think the subsequent call of np.bitwise_and() becomes irrelevant.
  2. np.copyto casts e.g. 1.9 -> 1. Shouldn't 2 be desired?
  3. Also, casting negative integers to unsigned types can lead to unexpected behavior, e.g. -162 -> 256 - 162 = 94

I hope I got this right. Is the reason for using np.copyto speed, since it's in-place? Otherwise, it seems a bit unsafe to do.
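A minimal sketch (with made-up values) of the truncation and wraparound behavior described in points 2 and 3:

import numpy as np

phase = np.array([1.9, -162.0, 255.4])     # example float phase values
display = np.zeros(3, dtype=np.uint8)      # stand-in for self.display

np.copyto(display, phase, casting="unsafe")
print(display)  # [  1  94 255]: truncation toward zero; the negative value wraps
                # modulo 256 (exact behavior is platform-dependent).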

`Filesystem` camera

For cameras lacking a native Python interface, a desirable method of control would be hardware triggering through Python and then retrieving data from images saved to the filesystem. Care will have to be taken to make this interface general enough to be useful.
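A minimal sketch of the retrieval half, assuming the external acquisition software saves frames to a watched directory (the function and parameter names here are hypothetical, not part of slmsuite):

import time
from pathlib import Path

import numpy as np
from PIL import Image


def get_latest_image(directory, pattern="*.tif", timeout_s=10.0, poll_s=0.05):
    """Block until a new image file appears in `directory`, then load and return it."""
    directory = Path(directory)
    seen = {p: p.stat().st_mtime for p in directory.glob(pattern)}
    start = time.time()
    while time.time() - start < timeout_s:
        for path in sorted(directory.glob(pattern), key=lambda p: p.stat().st_mtime):
            if path not in seen or path.stat().st_mtime > seen[path]:
                return np.array(Image.open(path))
        time.sleep(poll_s)
    raise TimeoutError(f"No new image matching {pattern} appeared within {timeout_s} s.")

A Camera subclass could wrap something like this in its image-grab method, with the hardware trigger issued beforehand.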

Stored SLM `phase` includes `phase_correction`

The phase attribute of the SLM is overwritten every time slm.write() is used. When phase_correct=True (default), the phase_correction (if not None) is added to this. However, if the user stores the phase with stored = slm.phase.copy() and tries to slm.write(stored), the phase_correction will effectively be applied twice and might lead to unexpected results. Perhaps the stored phase should not include the phase_correction?
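A toy model of the pitfall (not the slmsuite API), just to make the double-application concrete:

import numpy as np

class ToySLM:
    """Toy stand-in: write() stores the written pattern plus the correction."""
    def __init__(self, correction):
        self.phase_correction = correction
        self.phase = None

    def write(self, phase, phase_correct=True):
        self.phase = phase + (self.phase_correction if phase_correct else 0)

slm = ToySLM(correction=0.5)
slm.write(np.zeros(4))       # stored slm.phase already includes the +0.5 correction
stored = slm.phase.copy()
slm.write(stored)            # correction added again -> 1.0 instead of 0.5
print(slm.phase)             # [1. 1. 1. 1.]; replaying with phase_correct=False avoids this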

Cameras update

Dashboard for the planned camera-related updates.

New interfaces:

Polished abstract functionality:

  • #33 (though this could maybe be split into a separate update),
  • #55 ,
  • Revamp how get_image() works with new interfaces: instead, new cameras will write _get_image_hw() (in the same manner as write() and _write_hw() for SLMs; see the sketch at the end of this issue),
  • Polishing the trigger and grab formalism in get_image() and forcing this functionality to be abstract.

Potentially:

  • Live pyglet viewer of the camera (alongside #7 ),
  • More careful pipelining of the data to GPU.
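A hedged sketch of the proposed get_image() / _get_image_hw() split referenced above (only the method names come from this issue; everything else is an assumption):

import numpy as np


class Camera:
    def get_image(self, timeout_s=1.0):
        # Shared logic (trigger/grab formalism, averaging, transforms) would live here...
        return self._get_image_hw(timeout_s)

    def _get_image_hw(self, timeout_s):
        # ...while each camera implements only the hardware-specific grab, in the same
        # manner as write() / _write_hw() on the SLM side.
        raise NotImplementedError


class DummyCamera(Camera):
    def _get_image_hw(self, timeout_s):
        return np.zeros((512, 512), dtype=np.uint16)  # stand-in for a vendor-SDK call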

3D Spots

GS-type algorithms are not fundamentally restricted to using DFTs. Another implementation of GS might consider holography at a handful of points in the farfield ($k$-space), where for each point the equivalent farfield transformation is performed, and the remaining points which would have been included in the "knm" DFT grid farfield (zeroed anyway during the GS loop) are disregarded; a minimal sketch follows the list below. There are three distinct advantages of such an approach:

  • This point-specific farfield transformation can include depth (hence 3D spots). Other cool things are planned.
  • The 2D or 3D coordinates of the transformation are floating point, which means that memory does not have to be spent to pad a 2D DFT grid if greater effective resolution is desired.
  • Speed, in the case of a small number of points (the cutoff is not yet benchmarked, but probably above 1000 points; this will depend on memory constraints and whether a GPU is used).
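A minimal numpy sketch of the point-wise farfield evaluation idea (names, units, and conventions here are assumptions, not the slmsuite API):

import numpy as np


def farfield_at_points(nearfield, kx, ky):
    """Evaluate the farfield of a complex nearfield only at floating-point (kx, ky)
    frequencies (cycles/pixel), instead of over a full padded "knm" DFT grid.
    Depth (3D spots) could be added via a per-point quadratic (lens) phase."""
    ny, nx = nearfield.shape
    x = np.arange(nx)[np.newaxis, :, np.newaxis]                   # (1, nx, 1)
    y = np.arange(ny)[:, np.newaxis, np.newaxis]                   # (ny, 1, 1)
    kx = np.asarray(kx, dtype=float)[np.newaxis, np.newaxis, :]    # (1, 1, npts)
    ky = np.asarray(ky, dtype=float)[np.newaxis, np.newaxis, :]
    kernel = np.exp(-2j * np.pi * (kx * x + ky * y))               # point-specific DFT kernel
    # Note: this materializes an (ny, nx, npts) array; a chunked or raw-kernel
    # version would be needed for many points or large SLMs.
    return np.sum(nearfield[..., np.newaxis] * kernel, axis=(0, 1))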

TODOs:

  • FreeSpotHologram (name might change) basic structure.
  • Overridden transfer functions for mapping to the point-farfield.
  • Superclass-compatible logic.
  • Estimate of N_batch_max from available memory.
  • Modify 2D spot checks and other functions to include 3D case.
  • Maths for normalized 3D depth, and transformations between units.
  • Determine best hierarchy sharing with SpotHologram.
  • Raw GPU kernels for speed.
  • Mapping feature.
  • Testing.
  • Documentation.

More Santec functions should be available in the python interface.

See SLMFuncDLL_Programmer’s_Guide(v2.0).pdf

  • More robust/complete error checking and display (ReadEDO, Table 3.4-1 SLM_STATUS).
    • Esp. for wavelength setting, issue #11 .
  • Useful info:
    • IDs (ReadSDO)
    • Temperature (ReadT)
  • Saving patterns to on-board memory (WriteMC, WriteMI, WriteMT, WriteMR, WriteMP, WriteMZ, WriteMW), trigger (WriteTI, WriteTM, WriteTC, WriteTS)
    • Partially implemented, but untested.

WOI Revamp

Changing the WOI, or window of interest (also called ROI, region of interest), can speed up camera acquisition. However, this functionality is currently only thinly implemented. This revamp would involve:

  • More uniform implementation of WOI methods.
  • Updating Fourier calibration data/etc upon WOI change (or some other equivalent solution).

Holoeye

Add a Holoeye interface for their SDK. Interestingly, this appears to have GPU functionality; it might be interesting to try to connect this with CUDA/cupy.

Santec wavelength table sometimes fails.

Setup
OS: Windows 10 Home
Python: 3.8.8
Conda: 4.10.3

(Have also seen this sometimes on other computers with python ~ 3.9, 3.10, windows 10, 11)

Issue
Sometimes, the Santec wavelength table does not set to the correct value: the wavelength remains at the old value and the phase is oddly at 0π. Sometimes, the wavelength becomes 0 nm.

Proposed Resolution

  • Debug to see if we're using pointers incorrectly, or whether we need to add a pause.
  • If the error is stochastic, add error checking to keep attempting to set the correct value until it's right (see the sketch below).
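A hedged sketch of the retry idea; the setter/getter here are placeholders for whatever Santec DLL calls are actually used:

import time


def set_with_retry(setter, getter, target, attempts=5, settle_s=0.5, tol=1e-6):
    """Keep attempting to set a value until the readback matches."""
    for _ in range(attempts):
        setter(target)           # e.g. the wavelength-table write
        time.sleep(settle_s)     # give the device time to update
        if abs(getter() - target) <= tol:
            return
    raise RuntimeError(f"Value did not settle to {target} after {attempts} attempts.")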

target in `self._update_weights_generic` for experimental feedback

Hey,

I was experimenting with the method SpotHologram._update_weights().
One thing I noticed is that when using feedback=experimental, the target is defined by self.spot_amp instead of self.target.
When I calculate the norm of self.spot_amp using Hologram._norm(), this norm isn't unity.

I wonder if that's an issue, as in Hologram._update_weights_generic() we divide (that is, compare) the value of self.spot_amp with the (normalized) feedback value self.feedback_corrected. If one is normalized but the other isn't, the factors by which we scale the weights will all be below (or above, depending on spot_amp) unity.
In comparison, when using feedback=computational, the target self.target is normalized and the scale factors are either below or above unity, as expected.

Apart from this (I don't know if this is the proper place to ask), I tried to use camera feedback and noticed that the non-uniformity of the spots increases (the image diverges). I did some checks and ensured that the weight updates are reasonable (e.g. the weight of a very dim spot gets increased by a large amount), and tried different gains (power=0.1, power=1.5, and 0.1 steps in between). Debugging with take also yielded good results: the spots are centered in their crop boxes and the feedback values are reasonable. The computational feedback also works flawlessly. Perhaps you have had similar divergence issues and there are some quick fixes one can try?

Thanks

Prerelease Nice-To-Haves

Nice to haves:

  • Test SLM blitting speed
    • Add a function to write a video to the SLM, potentially triggered (moved to #12).
  • take wavefront_calibrate helpers outside of function, make them private static methods - @ichristen thinks this is low priority, as these functions are never seen by the user. Maybe this can be done after release, if it's clean.
  • automatic black formatting (edit: future problem. not now)
  • make a config file for dll paths, possibly add an install script (will revisit based on user feedback).
  • Make logo dark-theme compatible.
  • Resolve verboseness: consider a flag in each class with default verboseness.
    • Simplified this a bit. Removed saving entirely.
  • Example or guide for aligning SLMs.
  • Implement FLIR cameras (moved to #20).
  • Add info() functions to help identify cameras and displays (class methods).
    • Done for SantecSLM, but not for others.
  • Free running camera window, for alignment (need to think carefully about this). (Moved to #7 and pygletcamera branch 381af3e)
  • Add links (DOI) to literature references.
  • thresh vs threshold?
  • .rst uses ``test`` (double backtick) for monospace instead of `test` (single backtick). Convert docstrings to the former.
  • hone option for Fourier calibration, to fit to the FFT- and convolution-produced guess.
  • Fourier calibration should error more elegantly when over or under exposed.
  • Method to measure SLM settle time.
    • (partially implemented)
  • Resolve nested loop tqdm in wavefront calibration.
  • Rename lcos_toolbox to toolbox and image_analysis to analysis
  • Make the Zoom axis in plot_farfield red to match the red box.
  • Label the yellow camera box in plot_farfield. Probably with a legend.
  • Consider removing cv2 dependencies in favor of numpy/scipy (esp. cupy-accelerated). Decided against.
  • Should plots have white backgrounds instead of transparent? (edit: future problem. not now)
    • Maybe the plot keyword can be passed a dict which is then passed to all figures/etc. (edit: future problem. not now)
  • Look into allowing external people to test the package on a live SLM (e.g. basement setup). (decided against)
  • Add SLM tips page (setup, alignment, screen timeout, mouseover errors, and other institutional knowledge)
  • Add FAQ page (moved for later, Tips for now).
  • Stats plotting colors
  • Unified figure size / etc solution, esp. for outside of jupyter (moved to future work, once someone complains).
  • Add option to wavefront_calibration() to only gather measured_amplitude.

Add different transfer functions

slmsuite currently focuses on holography entirely in the Fourier domain, where the transfer function is a simple Fourier transform. However, one can do more cool things with other transfer functions. Even though the goals of Odak are a bit different from slmsuite, Odak has a good implementation of the different sorts of transfer functions which might be of interest.
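For reference, a minimal numpy sketch of one such alternative: free-space (angular spectrum) propagation by a distance z instead of a plain Fourier transform (a sketch only; the function name and conventions are not from slmsuite or Odak):

import numpy as np


def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field sampled with pixel pitch dx by a distance z.
    All lengths are in the same units; evanescent components are suppressed."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(2j * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0))) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)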

Axicon

Add an axicon() phase pattern to slmsuite.holography.toolbox. Will need to think carefully about how to define the "focal length".
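A minimal sketch of the phase pattern itself, parameterized by a radial blaze period rather than a focal length (the signature, grid convention, and units are assumptions, not the slmsuite API):

import numpy as np


def axicon(x_grid, y_grid, period):
    """Conical phase phi(r) = -2*pi*r/period: deflects light radially by one
    blaze period, analogous to how a linear blaze deflects along one axis."""
    r = np.sqrt(np.square(x_grid) + np.square(y_grid))
    return -2 * np.pi * r / period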

slmsuite.holography.toolbox.smallest_distance unexpected behavior

smallest_distance returns different value depending on ordering of vectors argument.

MWE:

import numpy as np
import matplotlib.pyplot as plt
import slmsuite.holography.toolbox as toolbox

# generate points
shape = (2, 2)
p0 = (0, 0)
p1 = (10, 0)
p2 = (0, 10)
p3 = (10, 10)
points_top_right = np.around(toolbox.fit_3pt(
    p3,
    p1,
    p2,
    N=shape,
    x0=(1, 1),
    x1=(1, 0),
    x2=(0, 1)
)).astype(np.uint64)
points_bottom_left = np.around(toolbox.fit_3pt(
    p3,
    p1,
    p2,
    N=shape,
    x0=(0, 0),
    x1=(1, 0),
    x2=(0, 1)
)).astype(np.uint64)
# plot
plt.scatter(points_bottom_left[0, :], points_bottom_left[1, :], alpha=0.5, color="red", s=100)
plt.scatter(points_top_right[0, :], points_top_right[1, :], alpha=0.5, color="blue", s=10)
xoff, yoff = 0.5, 0.5
for idx in range(np.prod(shape)):
    name_str = "{}".format(idx)
    xtr, ytr = points_top_right[:, idx]
    plt.text(xtr + xoff, ytr + yoff, name_str, color="blue")
    xbl, ybl = points_bottom_left[:, idx]
    plt.text(xbl - xoff, ybl - yoff, name_str, color="red")
ll = -1
lh = 11
plt.xlim(ll, lh)
plt.ylim(ll, lh)
plt.show()
# smallest distance
dtr = toolbox.smallest_distance(points_top_right)
dbl = toolbox.smallest_distance(points_bottom_left)
print("distance_top_right: {}, distance_bottom_left: {}".format(dtr, dbl))

Prerelease TODOs

Assorted TODOs:

  • pypi release (@tpr0p)
    • get versioning working
    • setup.py
    • (wait for all TODOs below this)
  • license update (@cpanuski)
  • README (@tpr0p )
    • uncomment pypi badge (wait for pypi release)
  • read the docs hosting of the docs (@tpr0p)
    • build stable version (wait for pypi release)
    • link examples in docs (maybe sphinx-gallery)
  • README, license, etc. for slmsuite-examples (@tpr0p)
  • License (@cpanuski )
  • Other documents
  • documenting (D), linting (L)--checkmark for ready for release.
    • why pages (@cpanuski)
    • holography: (@ichristen )
      • algorithms.py
      • image_analysis.py
      • spot_array.py
      • lcos_toolbox.py
    • hardware: [D, L]
      • cameraslm.py
      • Cameras:
        • camera.py
        • alliedvision.py
        • cheetah640.py
        • flir.py
        • mmcore.py
        • tlcamera.py
      • SLMs:
        • slm.py
        • screenmirrored.py
        • santec.py
    • misc: [D, L]
      • fitfunctions.py
      • files.py
  • Examples:
    • Bugtest examples
      • computational holography (@cpanuski -- text outline)
        • GS vs WGS
      • experimental holography (@tpr0p)
        • how to load hardware
        • Single spot and blazes
        • fourier calibration: process, spots on desired pixel & error (+/or arrays of spots & mean error?)
        • maybe split here
        • WGS vs experimental WGS: camera feedback on spot array
        • talk about take
      • Pictorial holograms (@ichristen )
      • wavefront calibration
      • structured light (@ichristen )
        • talk about imprint
    • Integrate into docs
  • @tpropson thinks all TODOs should be in comments, not docstrings
  • Check uniform use of `x` vs ``x`` (see nice-to-have ".rst uses test (two grave) for monospace instead of test (one grave). Convert docstrings to the former.") (@ichristen )
  • Check status of doc links throughout (@ichristen )
  • all file names should be snake case - @ichristen disagrees with this see link; would prefer no underscores in module names. @cpanuski disagrees with @ichristen for the reason listed in @ichristen link: underscores improve readability of mixed acronyms (e.g. lcos, slm) with other technical nouns. Underscores are only "discouraged" in the package names (so agreed here: use slmsuite vs slm_suite; foregoing the enhanced readability). @ichristen notes that "Underscores are only "discouraged" in the package [and module] names". File and folder names are module names. A solution for lcos in lcos_toolbox might be to rename to toolbox (see nice-to-haves).
  • dark theme for logo (@ichristen )
  • fix the logo font (@cpanuski)
  • Search the package for all TODOs and resolve them. (@ichristen )
  • Permanently move examples folder into docs? (Can't seem to import from other directory).
  • Resolve units
    • For algorithms
    • For fourier calibration
  • Are camera dx_um and dy_um necessary?
    • Keeping in. Now these default to None.
  • Add cameras.
    • Make sure special libraries have try statements around them so they can still be built in docs.
  • Cleanup lcos_toolbox
  • Cleanup image_analysis
    • blob_detect might need work.
    • spot analysis functions might need work.
  • Cleanup fourier calibration
  • Cleanup wavefront calibration
  • Cleanup spot array.
  • Cleanup algorithms.
  • Remove misc
  • Figure out how to name one-class files. For instance, slmsuite.hardware.SantecSLM.SantecSLM() would ideally be one dir up slmsuite.hardware.SantecSLM().
    • Implement in docs
    • This seems to also have caused all the camera files to be loaded when a single one of them is. Fix this.
    • @tpr0p says that there is no way to do this in Python without having everything loaded. When you do import slmsuite.hardware.santec.Santec, you load the __init__ files at each level, so you would have to leave the __init__ files blank and do slmsuite.hardware.santec.Santec.
  • AlliedVision .flush() (@ichristen )
    • Stopgap implemented. Need to get software triggering working.
  • autoexpose verboseness. (@ichristen )
  • Remove SLM dkx, dky.
  • WGS:
    • “computational-spot” fixes, psf_knm
    • Check WGS convergence and pox
    • Check that WGS variants are correctly implemented.
    • speckle reduction strats
      • Moved later.
    • refine_offset() implementation to correct Fourier calibration
    • Try out non-equal SpotHologram amplitudes, also test that amplitude=0 doesn't error
  • Fourier calibration better Gaussian fit hone() (based on dev)
  • Run examples on dev (some functions are moved).
  • Pull dev into main.

`averaging` flag for cameras

Having a built-in averaging option would be very useful - e.g. turning an 8-bit camera into an effective 12-bit camera using 16 automatic averages.
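A minimal sketch of the idea (cam.get_image() is assumed to return a single frame; the helper name is hypothetical):

import numpy as np


def get_averaged_image(cam, averages=16):
    # Accumulate in float: averaging N frames improves SNR by sqrt(N), while keeping
    # the sum instead of the mean gives roughly 8 + log2(N) bits of dynamic range
    # from an 8-bit sensor (12 bits for N = 16).
    frames = [np.asarray(cam.get_image(), dtype=np.float64) for _ in range(averages)]
    return np.mean(frames, axis=0)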

Wavefront calibration fit & correct the center of the spots array

Hi,

Got two questions and one potential bug.

In the file slmsuite/holography/algorithms.py, under the subclass SpotHologram, function refine_offsets(), line 2999: it seems that the self.measure() function will not return anything. Thus, maybe it is intended to be something like the following.

if img is None:
    self.measure(basis="ij")
    img = self.img_ij

The first question is related to this function. The current spot array I have generated with camera feedback is not perfectly aligned to the intended grid (this also seems to be the case in the doc example). I am wondering whether there is an easy way to use this refine_offsets() function within the feedback optimization to improve the precision of the spot locations.

The second question is about the wavefront calibration. Since the camera (rather than a point detector) is used as the measurement in the calibration process, it seems to me that only one capture of the image would be enough to fit a sine curve and extract the phase at the target pixel. I am just curious whether there are some reasons preventing us from doing that, or whether there are some advantages to stepping the phase.

Thanks in advance!

Kaizhao

CameraSLM simulation

Add ability to implement all CameraSLM functionality (e.g. wavefront correction) on a simulated camera and SLM. Useful for testing and implementing new wavefront correction algorithms.

TODO:

  • Simulated camera base implementation
  • Simulated SLM base implementation
  • Change magnification parameter to focal length in simulated camera
  • Check automatic padding computations
  • Clean up sim tutorial

Wavefront calibration options

Wavefront calibration should have an option to not overwrite the previous calibration. The current behavior removes any existing calibration. Taking a new calibration with the old one present allows the user to validate the performance of the old calibration (e.g. lambda/200 precision).

wavefront_calibrate()

  1. Wondering whether there is a typo in the cameraslms.py file, line 718:

phase_correction = self.slm.measured_amplitude

maybe it should be the following?

phase_correction = self.slm.phase_correction

  2. If I only want to get the amplitude measurement from wavefront_calibrate(), why is the returned value taken from Step 1.25 in the measure() function (i.e. before the correction of the blazing angle) rather than after Step 1.5?

Issues for adding header and dll files in python 3.7 or earlier

Discussed in #5

Originally posted by YuanLi-sz October 26, 2022
Hi,

The current error I am getting when I try to initialize the SLM is "'slm_funcs' is not defined".
I am using Python 3.7, and I found that both santec.py and _slm_win.py use 'add_dll_directory'; I think Python 3.7 doesn't support this, so this should be the reason for the error. Does this conclusion seem reasonable to you? And besides switching to Python 3.8 or newer, do you have other suggestions for fixing this issue?

Thank you very much!
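For reference, os.add_dll_directory() only exists on Python >= 3.8 (Windows). A hedged sketch of a fallback that could be used on Python 3.7 (not current slmsuite code):

import os


def add_dll_directory_compat(path):
    """Use os.add_dll_directory() when available; otherwise fall back to PATH."""
    if hasattr(os, "add_dll_directory"):
        return os.add_dll_directory(path)
    # On Python 3.7, DLL directories are still resolved through PATH.
    os.environ["PATH"] = path + os.pathsep + os.environ.get("PATH", "")
    return None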

pyglet 2.0.0

pyglet updated to 2.0.0 on 11/01/2022, which caused breaking changes for screen-mirrored SLMs. We should find a stable way to version the package's dependencies; see about doing this in setup.py, which notes dependencies for pip install git+<>. We should also fix the screen-mirrored SLMs to use the new version of pyglet. For now one can do
pip install pyglet==1.5.27
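A sketch of what the pin might look like in setup.py (the surrounding metadata here is illustrative, not the actual setup.py):

from setuptools import setup, find_packages

setup(
    name="slmsuite",
    packages=find_packages(),
    install_requires=[
        "pyglet>=1.5,<2.0",  # pyglet 2.0.0 (11/2022) breaks the screen-mirrored SLM code
    ],
)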

Docs Cleanup

general stuff:

  • combine return type w/ return line in docs (decided against for now)
  • remove cv2 dependency? (decided against)
  • group all of the phase functions w/i utils? or at least sort the list. Two methods:
    • prefix phase_ so they sort alphabetically together.
      • See commit a01344a for a potential solution. Need to pull in if this looks good.
    • get sphinx to sort according to code location (this is not currently possible).
  • all debug plots should not show by default, e.g. plot=False
  • Update array_like to be more specific. e.g. numpy.ndarray<float> (height, width) (Started the conversion in some cases, but decided against in general. Shape should be clear from the docs no matter the case.)
  • Do a pass checking for typos in examples.
  • Do a pass checking for typos in other docs.

toolbox

affine_vectors:

  • change name (fit_affine)
  • add documentation

blaze:

  • kx/k formatting
  • change cross to cdot

clean_2vectors:

  • change name for format_2vectors

determine_source_radius:

  • x/lambda formatting

lens:

  • x/lambda formatting

hermite_gaussian:

  • format references
  • remove "untested"
  • x/lambda fix
  • nx,ny change to m,n

imprint:

  • x/lambda formatting
  • code example

ince_gaussian:

  • format references
  • x/lambda, xgrid,ygrid (this is repeated on all grids, make them all uniform)
  • needs return statement (check returns on all phase functions)

smallest_distance:

  • make computationally efficient (optional; decided against)

largest_difference_metric:

  • better name
  • make this a private function?

lens:

  • format f_x, f_y, ang
  • make angle a separate parameter "angle"

matheui_gaussian:

  • not implemented

pad:

  • rephrase "shape to pad into"

print_blaze_conversions:

  • add use to tutorial

unpad:

  • on returns: add "depending what is passed to matrix" or similar to clarify

zernike:

  • add return

zernike_sum:

  • add return

analysis

blob_detect (good example for correctly formatted citations):

  • add options/documentation to filters (@cpanuski )

blob_array_detect:

  • parity_check - rename to "orientation_check" or similar
  • check that "img" use is consistent throughout

get_transform:

  • on return types: if function, what does that function take as input/return?
  • rename to "orientation_transform"

make8bit:

  • add underscores to name (make_8bit)
  • remove totally?

take:

  • plot isn't documented
  • points -> vectors
  • centered: default to true

take_fit:

  • remove "untested", but add note that its still being developed
  • generalize to arb. images(e.g. "taken" -> "imgs"), then add a note about typical use with taken

take_XYZABCDEFGHIJK (and all its variants):

  • condense
  • change the names from "take" -> "img"
  • ellipticity misspelled

threshold:

  • remove?

Algorithms:

Hologram:

  • add defaults to
  • add sizes to attribute descriptions
  • fix ":mod" for numpy
  • "N" -> something else ("n_iter"?)
  • References start at 5?
  • stats within stats?

Hologram.calculate_padding:

  • revisit "ij" and "xy" notation
  • What happens if square padding is false?
  • Add a line about default behavior
  • change name to "calculate_padded_shape"

Hologram.optimize:

  • clean up reference numbering

FeedbackHologram:

  • ensure consistency with ij/nm indexing
  • output -> out
  • better name for "correct_image", add "NotImplemented" flag

SpotHologram:

  • "correct_image" name - not precise (maybe "refine_offset"), combine with "correct_spots"?

Hardware

SLM:

  • "measured_amplitude" has typos
  • method "phase_wrap" extraneous comma
  • "set_analytic_amplitude": change name to "determine", also use normalized units
  • "wav_norm" to "phase_scaling"; also update docstring to reflect inclusion in superclass
  • in "write", remove "flatmap" flag (combine with "phasecorrect"
  • debug slm.info()

Cameras:

  • make "view_continuous" private

Camera:

  • "window", "roi" -> "woi"? for all cameras
  • "transform" make name uniform w/ above (It's fine.)
  • autoexposure: averages -> average_count
  • is autoexposure z in um or normalized wavelengths?

CameraSLMs:

  • change ijcam, kxyslm based on convention decided above
  • calc_spot_size could use some work
  • process_wavefront_calibration: r2_thresh -> r2_threshold
  • wavefront_calibrate: fix references formatting
    • exclude_superpixels: say that nx and ny are superpixels cropped from each side globally

Misc

  • fitfunctions
    • hyperbola z0 -> z_0

Blazing orientation + wavefront correction

Hello,

I have one question and a possible bug:

  1. This example in the docs uses a blazing vector of vector = (.002, .002). The spot is displaced into the lower left corner. I'm a bit confused about this: what is the orientation of the coordinate axes of the kxy grid, i.e. which corner is (+, +)? In particular, if I use e.g. blaze_vector = (.002, .002) I get a spot in the lower right corner.

  2. I think the exclude_superpixels flag does not work as intended. Changing to

# Line 1096 in meth: FourierSLM.wavefront_calibrate()

            if nx < exclude_superpixels[0]:
                continue
            if nx > (NX - exclude_superpixels[0]):
                continue
            if ny < exclude_superpixels[1]:
                continue
            if ny > (NY - exclude_superpixels[1]):
                continue

should do the trick.

  3. I'm asking this because my wavefront calibration (see image below) contains rather steep phase gradients. This means that when blazing the corresponding superpixels, the spot is not at its expected location but displaced. While this in itself is not a problem, I would have expected the gradients (per superpixel) to be steep in regions away from the interference point, as the optical path length between the (+, +) SLM corner and the (+, +) camera corner should be smallest. Also, when recording an image with the wavefront correction mask applied, the image quality rapidly decreases away from the interference point. I am sure this is a problem with our setup and not with the algorithm, but wanted to ask if you have experienced similar issues.

(Images: wavefront_calibration_pixels, Image_GitHub)

I still suspect there is some issue with rotations / flips, but I cannot seem to pinpoint it.

`self.phase_ff` in `WGS-Kim` + `stats_group=["computational_spot"]`

Hey,

WGS Kim

When using method="WGS-Kim", we have the following code snippet:

# line 658 in slmsuite.holography.algorithms
if (
    "fixed_phase" in self.flags
    and self.flags["fixed_phase"]
    and self.phase_ff is not None
):

I wonder about self.phase_ff: if I initialize a SpotHologram class via e.g. make_rectangular_array(), we have .phase_ff = None.
If I then run .optimize, self.phase_ff only gets set at the end of the loop.

Put differently, the code snippet I posted above always evaluates to False because self.phase_ff = None. Hence, the far-field phase never gets fixed and I essentially run WGS-Leonardo.

Surely I am missing something or doing something wrong as WGS-Kim seems to work fine in the example docs.

stats_groups=["computational_spot"]

I feel that the efficiency when using the above stat group is not calculated properly for high-resolution images.
What happens is that we compare the total power in an image, total = np.sum(np.square(self.amplitude_ff)),
with the power in single pixels, e.g. self.amplitude_ff[self.spot_knm_rounded[:, 0], self.spot_knm_rounded[:, 1]].

My fix is

# In meth: SpotHologram._calculate_stats_spots():

  if "computational_spot" in stat_groups:
            total = cp.sum(cp.square(self.amp_ff))

            psf_knm = 2 * int(toolbox.smallest_distance(self.spot_knm_rounded) / 2) + 1
            if psf_knm < 10:
                logging.warning("In meth: _calculate_stats_spots: ROI around simulated spots is clipped to 10.")
                psf_knm = 10
            if psf_knm > 50:
                logging.warning("In meth: _calculate_stats_spots: ROI around simulated spots is clipped to 50")
                psf_knm = 50

            feedback = analysis.take(
                np.square(self.amp_ff), self.spot_knm_rounded, psf_knm, centered=True, integrate=True
            )
            feedback = np.sqrt(np.array(feedback, dtype=self.dtype))

            stats["computational_spot"] = self._calculate_stats(
                feedback,
                self.spot_amp,
                efficiency_compensation=False,
                total=total,
            )

This surely is slower but will calculate the efficiency properly.
Similarly, one can update the feedback for SpotHologram classes, e.g. via

# in meth: SpotHologram._update_weights():

        elif feedback == "computational_spot":
            psf_knm = 2 * int(toolbox.smallest_distance(self.spot_knm_rounded) / 2) + 1
            if psf_knm < 10:
                logging.warning("In meth: _calculate_stats_spots: ROI around simulated spots is clipped to 10.")
                psf_knm = 10
            if psf_knm > 50:
                logging.warning("In meth: _calculate_stats_spots: ROI around simulated spots is clipped to 50")
                psf_knm = 50

            logging.info("Use updated routine to calculate weights for Spot Holograms")
            feedback = analysis.take(
                np.square(self.amp_ff), self.spot_knm_rounded, psf_knm, centered=True, integrate=True
            )
            feedback = np.sqrt(np.array(feedback, dtype=self.dtype))
            spot_amp_norm = self.spot_amp * 1 / Hologram._norm(self.spot_amp)

            self.weights[self.spot_knm_rounded[1, :], self.spot_knm_rounded[0, :]] = (
                self._update_weights_generic(
                    self.weights[self.spot_knm_rounded[1, :], self.spot_knm_rounded[0, :]],
                    cp.array(feedback),
                    spot_amp_norm,
                )
            )

Thanks for looking into this!

Add subpixel `SpotHologram` capability

Currently, we pad holograms (see the first tip in Hologram) to increase the resolution of spot patterns. This is definitely needed to reach resolutions 2-4x the unpadded default. However, to reach resolutions beyond that, we could instead target (diffraction-limited) Gaussians shifted by subpixel distances, instead of the current default which targets single pixels. This approach will enable better resolution without consuming excessive memory.
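A minimal sketch of such a subpixel target (names and normalization are assumptions): a Gaussian of roughly diffraction-limited radius centered at a floating-point location on the farfield grid.

import numpy as np


def gaussian_target(shape, center, radius_px=1.0):
    """Amplitude target: a Gaussian centered at a floating-point (x, y) location."""
    y = np.arange(shape[0])[:, np.newaxis]
    x = np.arange(shape[1])[np.newaxis, :]
    r2 = np.square(x - center[0]) + np.square(y - center[1])
    target = np.exp(-r2 / (2 * radius_px**2))
    return target / np.sqrt(np.sum(np.square(target)))   # normalize total power to 1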

Fix autodoc problems

Side navigation tree doesn't expand to show all pages unless that page is visited via links from main panel.

Unify Erroring

  • Remove assert in cases where it would be better to raise a ValueError, etc. (see the sketch after this list).
  • Remove filenames in raise Error("file.py: Error"), as the filename should be reported in the stack trace already.
  • Other format unification.
  • Document error paths fully in docstrings.
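A small before/after sketch of the first point (the helper and message are hypothetical):

import numpy as np


def _check_vectors(vectors):
    # Before: assert vectors.shape[0] == 2, "toolbox.py: expected 2xN vectors"
    # After: raise a specific exception; the file name is already in the traceback.
    vectors = np.asarray(vectors)
    if vectors.shape[0] != 2:
        raise ValueError(f"Expected 2xN vectors, got shape {vectors.shape}.")
    return vectors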

Some corrections for MRAF in the package

Discussed in #64

Originally posted by Shuyun1247 October 27, 2023
Hello,

I've been using your package, and it's been working great for me. However, I encountered some issues when using the MRAF method in your code, specifically when setting the signal region to be a rectangular "null_region". I've identified some areas in algorithms.py that, upon modification, resolve these issues. Here are the changes I suggest:

  1. Modification in ijcam_to_knmslm function:
    In algorithms.py, find the definition of the function ijcam_to_knmslm. Before the affine transformation, add the line b = b.reshape(-1) to ensure the correct shape for b.

    Original:

    cp_img = cp.array(img, dtype=self.dtype)
    cp.abs(cp_img, out=cp_img)
    
    # Perform affine.
    ...

    Modified:

    cp_img = cp.array(img, dtype=self.dtype)
    cp.abs(cp_img, out=cp_img)
    
    b = b.reshape(-1)
    
    # Perform affine.
    ...
  2. Modification related to null_knm and null_region_knm:
    Find the following section in algorithms.py and modify it as follows to account for both null_knm and null_region_knm.

Original:

if self.null_knm is None:
 ...

Modified:

if self.null_knm is None and self.null_region_knm is None:
 ...
...

...

I'm not entirely sure if I was using the SpotHologram incorrectly, but after making these corrections, the method worked properly for both "null_vector" and "null_region" for us. I hope these suggestions are helpful. Let me know if you need any more details or if there are any concerns.

Thank you!

Tests

Write testing coverage.

  • For computational things that don't require hardware, we can just write code (see the sketch after this list).

  • For analysis-type things that don't require hardware, we can have images/data that get pulled from the cloud.

  • For things that require hardware, it would be nice to have a dedicated testing setup.

  • Tests should be triggered on a PR. For hardware, we have to make sure it's not getting DDoS'd.
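A hedged sketch of a hardware-free pytest case, assuming smallest_distance takes a 2xN array of points and returns the minimum pairwise distance (as in the MWE in the smallest_distance issue above):

import numpy as np
import slmsuite.holography.toolbox as toolbox


def test_smallest_distance_order_invariant():
    points = np.array([[0, 10, 0, 10], [0, 0, 10, 10]], dtype=float)  # 2x2 grid, pitch 10
    shuffled = points[:, ::-1]
    assert np.isclose(toolbox.smallest_distance(points), 10)
    assert np.isclose(
        toolbox.smallest_distance(points), toolbox.smallest_distance(shuffled)
    )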

DataRay Camera support

Hi,

We really like your package, but we are using a lot of DataRay cameras in the lab, so I'm wondering whether it would be possible for you to support them in the future.
