
Comments (12)

ichristen commented on June 12, 2024

Hi @LeoKonzett,

Thanks for this feedback, it's very useful to see what people are doing and what issues are encountered.

  • _update_weights() for spot_amp: Yeah, I think this part was changed back and forth, and the current documentation (and code) may not reflect the intended behavior. I'm adding the documentation tag to this issue. Also, our tests have been using spot_amp=None, which I think is an oversight. The intended behavior is:

    • The user feeds something to spot_amp= in arbitrary units (not normalized).
    • Despite what the documentation claims (that spot_amp is automatically normalized), we want to keep the user's units so that the user can easily modify this list (always a numpy array) if desired and have the optimization move towards these values.
    • You're right that Hologram.update_weights_generic() is fed an unnormalized target, but the end of this function normalizes the weights (see here), so I think it still works out.
  • Optimization divergence: We have noticed in some cases that image holograms (FeedbackHologram) diverge under certain power values or levels of feedback. We're in the process of improving this. However, we have never found a similar issue for SpotHologram in our use cases with the current code; it has always been robust in our tests. I suggest the following:

    • Is your settle_time_s long enough for your SLM? If the SLM is not given enough time to settle, you'll be measuring the result of the phase pattern from the previous iteration, which can lead to unstable optimization. I think the current default on main is settle_time_s=0; maybe we should restore the settle_time_s=0.3 default which we previously found good for most SLMs. Ultimately, the user should choose settle_time_s based on their application: higher-precision holograms require waiting longer (i.e. longer than one 1/e settling time). There should be more documentation regarding this, so more reason to add the documentation tag. (See the sketch after this list for a quick settle-time check.)
    • What camera are you using? It's possible that .flush() (which is intended to avoid old frames and thus outdated feedback) is not working as desired for your camera.
    • If the above doesn't work, could you send us the code snippet that generates the spot pattern of interest? I can imagine certain patterns that are "hard" for WGS to generate, and it might be useful for us to debug. I'm adding the bug tag in light of this.
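A quick settle-time/flush sanity check might look like the sketch below. This assumes a calibrated FourierSLM fs (as in the docs examples) with the usual cam.flush(), cam.get_image(), and slm.write(..., settle=True) interface; adjust the names and the example blaze vector to your hardware.

import numpy as np
from slmsuite.holography import toolbox

# Sketch: write a known blaze, wait for the SLM to settle, flush stale frames,
# and confirm the captured frame reflects the *new* pattern.
fs.slm.settle_time_s = 0.3                                    # previously-recommended default
blaze_phase = toolbox.blaze(grid=fs.slm, vector=(0.002, 0))   # example kxy blaze; pick one that visibly moves the spot
fs.slm.write(blaze_phase, settle=True)                        # settle=True waits settle_time_s after writing
fs.cam.flush()                                                # discard any buffered (old) frames
img = fs.cam.get_image()
print("Peak pixel (row, col):", np.unravel_index(np.argmax(img), img.shape))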

Thanks again for this feedback!


ichristen commented on June 12, 2024

Actually, you're right that the lack of spot_amp normalization is a bug. Even though the current code is fine for the Leonardo feedback (if the target is off by a constant c, then the result is off by a factor of c^p, which goes away with normalization), the Nogrette feedback will yield incorrect results. We will fix this by forcing normalization onto target_amp locally in Hologram.update_weights_generic(). ETA for a patch is next week.
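Schematically, the difference is sketched below. These are schematic forms of the two weighting rules (with power p and Nogrette factor f), not slmsuite's exact implementation; the point is only that a constant scale on the target cancels after normalization for the power-law (Leonardo) rule but not for the Nogrette rule.

import numpy as np

def leonardo_update(weights, target, measured, p=0.7):
    # Power-law reweighting: scaling `target` by c multiplies every weight by c**p,
    # which the final normalization removes.
    weights = weights * (target / measured) ** p
    return weights / np.linalg.norm(weights)

def nogrette_update(weights, target, measured, factor=0.5):
    # Nogrette-style reweighting: the measured/target ratio sits inside a
    # non-homogeneous expression, so a constant scale on `target` does NOT
    # cancel after normalization.
    weights = weights / (1 - factor * (1 - measured / target))
    return weights / np.linalg.norm(weights)

# Quick check with made-up numbers: scale the target by 10.
target = np.array([1.0, 2.0, 3.0])
measured = np.array([1.1, 1.9, 3.2])
w = np.ones(3)
print(np.allclose(leonardo_update(w, target, measured),
                  leonardo_update(w, 10 * target, measured)))  # True: normalization saves us
print(np.allclose(nogrette_update(w, target, measured),
                  nogrette_update(w, 10 * target, measured)))  # False: target must be normalized first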


LeoKonzett commented on June 12, 2024

Hi @ichristen,

thanks for the quick response. I checked both settle_time_s = 0.4 and .flush(), and both are working properly (tested, e.g., by writing a blazed grating to the SLM with settle=True and then capturing a frame).

The pattern I'm generating is similar to the one from the docs webpage.

# Initialize FourierSLM as `fs` as in docs webpage.
# fs.cam.shape = (800, 800)

xlist = np.arange(250, min(fs.cam.shape) - 300, 90)  # 3 x-positions: 250, 340, 430
ylist = np.arange(250, min(fs.cam.shape) - 300, 90)  # 3 y-positions: 250, 340, 430
xgrid, ygrid = np.meshgrid(xlist, ylist)

square = np.vstack((xgrid.ravel(), ygrid.ravel()))   # 2 x 9 array of (x, y) spot positions
from slmsuite.holography.algorithms import SpotHologram
hologram = SpotHologram(shape=(2048, 2048), spot_vectors=square, basis='ij', cameraslm=fs)

# Optimization
comp_kwargs = {"power": 0.7}
hologram.optimize('WGS-Kim', feedback='computational', stat_groups=['computational'], maxiter=30, **comp_kwargs)

Upon recording with the camera, we get a 3x3 array as intended. Checking the resulting img with subimages = analysis.take(img, vectors=square, size=40, integrate=False) also yields good results.

# Optimization w. cam feedback
exp_kwargs = {"power": 0.2, "factor": 0.5} # Factor is for Nogrette

hologram.optimize('WGS-Kim', maxiter=5, feedback='experimental_spot', stat_groups=['computational', 'experimental_spot'], **exp_kwargs)


LeoKonzett commented on June 12, 2024

Hi @ichristen,

I managed to spot the issue: I was using the wrong coordinate convention. When blazing to a spot (100, 700), for example, I was blazing to row_idx=100, col_idx=700 instead of row_idx=700, col_idx=100.
Maybe this could be added to the docs (there you blaze to (600, 600), so it's currently not obvious that the convention is Cartesian).
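In concrete terms (a small sketch; fs is the calibrated FourierSLM, and the interpretation follows the Cartesian convention clarified later in this thread):

# 'ij' vectors are Cartesian (x, y) = (column, row), not numpy (row, column).
vector = fs.ijcam_to_kxyslm((100, 700))  # targets camera pixel column 100, row 700,
# i.e. img[700, 100] in numpy indexing -- not img[100, 700].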

Thanks!


ichristen commented on June 12, 2024

So the experimental holography is converging now? I'm not quite sure how the coordinate convention would mess with convergence, as convergence is handled internally with a uniform convention.

Here's my TODO list for this issue's thread:

  • Add more clarity about (numpy) shape (y,x)=(h,w) vs vector (x,y) formalism in both docs and examples.
  • Normalize spot_amp in Hologram.update_weights_generic() and update associated documentation (decided to implement this partially).
  • Figure out what to do with settle_time_s.
  • I'm also going to add a Hologram flag which will tell stats to save the feedback images etc., with the option to export them as an .h5 file for user reference and debugging. That way, people can send us the feedback images so we can check whether WGS is doing something stupid.
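In the meantime, something like the following rough sketch (plain h5py; feedback_frames is a hypothetical user-collected list of camera frames, not an existing Hologram attribute) lets a user dump frames to share with an issue report:

import h5py
import numpy as np

# Hypothetical sketch: save per-iteration feedback frames to an .h5 file for
# debugging/sharing. `feedback_frames` is collected by the user during
# optimization; it is not current slmsuite API.
with h5py.File("wgs_feedback_debug.h5", "w") as f:
    f.create_dataset("feedback_frames", data=np.stack(feedback_frames))
    # (one could also save the target, weights, stats, etc. in the same file)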


LeoKonzett commented on June 12, 2024

Hi @ichristen,

yes, the holography is converging.
To be more precise: the camera we're using returns a transposed frame. I thought the Fourier calibration accounted for that, but it's reasonable that it doesn't. My fix is to include a transpose=True keyword in the Camera() (Camera.py) constructor, similar to flip_ud=True or flip_lr=True, and to add that to analysis.make_transform().
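For reference, a minimal sketch of that idea, written so it does not depend on slmsuite internals (the transpose keyword is the proposal, not existing API; this hypothetical wrapper just transposes every frame returned by an existing camera object):

import numpy as np

class TransposedFrames:
    # Hypothetical wrapper: transpose every frame returned by the wrapped camera.
    def __init__(self, camera, transpose=True):
        self.camera = camera
        self.transpose = transpose

    def get_image(self, *args, **kwargs):
        img = np.asarray(self.camera.get_image(*args, **kwargs))
        return img.T if self.transpose else img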

Secondly, there might be an issue with SpotHologram.make_rectangular_array().
The code snippet

slm_shape = (1200, 1920)  # (h, w) convention
array_holo = SpotHologram.make_rectangular_array(
    slm_shape, array_shape=[8, 4], array_pitch=[12, 12], basis='knm', cameraslm=fs)

zoom = array_holo.plot_farfield(source=array_holo.target, title='Initialized Nearfield')

doesn't generate a centered array. Implementing the change below fixes this behavior:

# In SpotHologram.make_rectangular_array()

# Make the grid edges. This accounts for the (h, w) -> (x, y) convention.
x_edge = (np.arange(array_shape[0]) - (array_shape[0] - 1) / 2)
x_edge = x_edge * array_pitch[0] + array_center[1]  # changed from array_center[0]
y_edge = (np.arange(array_shape[1]) - (array_shape[1] - 1) / 2)
y_edge = y_edge * array_pitch[1] + array_center[0]  # changed from array_center[1]


ichristen commented on June 12, 2024

Hi @LeoKonzett,

Thank you for catching the make_rectangular_array centering issue. The default centering logic changed recently, and it looks like I didn't test non-square shapes. The issue is actually on line 1942 and is fixed in e572c47. The grid-edge logic is correct as written, because array_center in other units is not flipped.

I'm still a bit confused about the transpose issue. Fourier calibration should have caught that, because a transpose is just a 90-degree rotation plus a flip (so I think the current arguments of make_transform cover transpose). In testing, I would take the eight orientation cases (four rotations, each with and without a flip) and make sure Fourier calibration works on each. The exposure of the camera, the choice of pitch, or camera noise might influence how well the "two spots removed" parity check is working, and it may just so happen that it works better when the camera is transposed (i.e. something is failing, but the default option happens to be the transposed option, so it all works out). If you send the image used for Fourier calibration, I can test the eight cases on that image and make sure things are working robustly.
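To make the "transpose = rotation + flip" point concrete, here is a standalone numpy check (independent of slmsuite):

import numpy as np

# A transpose is equivalent to a 90-degree rotation followed by an up-down flip,
# which is why rot / flip arguments can in principle cover a transposed camera.
A = np.arange(12).reshape(3, 4)
print(np.array_equal(A.T, np.flipud(np.rot90(A))))  # True

# The eight orientation cases: four rotations, each with and without a flip.
orientations = [op(np.rot90(A, k)) for k in range(4) for op in (lambda x: x, np.fliplr)]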


LeoKonzett commented on June 12, 2024

Hi @ichristen,

the routine below works for flip_ud=True and rot="90".

fs_optimize_kwargs = {"method": "WGS-Kim", "maxiter": 30, "feedback": "computational", "stat_groups": ["computational"]}

fs.fourier_calibrate(array_shape=[20, 20], array_pitch=[12, 8], array_center=None, plot=True, 
                     autofocus=False, autoexposure=False, **fs_optimize_kwargs)

I attach the image I use for calibration below. I slightly adapted analysis.blob_detect() to do dynamic thresholding (see snippet below), so you might have to adjust the thresholds of the CV2 blob detector to get similar results.

# In analysis.blob_detect(): set the CV2 simple blob detector thresholds dynamically.

import logging

import numpy as np
from skimage.feature import peak_local_max  # new dependency

# Find the brightest peaks in the (preprocessed) cv2img.
peak_coords = peak_local_max(cv2img, min_distance=15, num_peaks=8)
y_idx = peak_coords[:, 0]
x_idx = peak_coords[:, 1]
heights = cv2img[y_idx, x_idx]

# Set the thresholds of the CV2 SimpleBlobDetector params around the detected peak heights.
params.minThreshold = np.min(heights) - 10  # 10 is padding but can be changed
logging.info(f"Lower threshold for CV2 blob detector is {params.minThreshold}")
params.maxThreshold = np.sort(heights)[-2] + 10
logging.info(f"Upper threshold for CV2 blob detector is {params.maxThreshold}")

This is not optimized for efficiency.

To test the calibration, I blaze with this code snippet

vector_blaze = fs.ijcam_to_kxyslm((100, 700))  # (100, 700) is supposed to be (x, y) convention

blaze_phase = toolbox.blaze(grid=fs.slm, vector=vector_blaze)

# plot_phase() is from https://slmsuite.readthedocs.io/en/latest/_examples/experimental_holography.html
plot_phase(blaze_phase, title="Blaze at pixel (100, 700)", zoom=False)

As visible below, the point gets blazed to (100, 700), which is the intended behavior.

[Image: camera frame showing the blazed spot]

[Image: img_for_calibration (the Fourier calibration image)]

One question that I have:
In the target image, the two spots for orientation_check are missing in the bottom right corner. But in the image above the two spots are missing from the top left corner. Is this intended?


ichristen commented on June 12, 2024

Hi @LeoKonzett,

Sorry for my late reply.

  • I slightly adapted analysis.blob_detect() to do dynamic thresholding (see snippet below) so you might have to change the thresholds for the CV2 blob detector to get similar results.
    I like the idea of improvements to .blob_detect(); that part of the code hasn't seen much attention in the revamp. Feel free to improve this function and submit a pull request! However, I would prefer not to add dependencies to main (such as skimage) unless absolutely necessary, as each additional dependency increases bloat and the risk of dependency errors. At one point I actually wanted to remove cv2 (opencv-python) as a dependency, before being convinced otherwise. Consider instead looking for similar features in existing dependencies, or writing something simple and custom if the functionality is needed.

  • In the target image, the two spots for orientation_check are missing in the bottom right corner. But in the image above the two spots are missing from the top left corner. Is this intended?
    Yes. The spots are deleted from the bottom-right corner in the $k$-space of the SLM, but your camera seems to have a few flips and rotations that cause the empty spots to appear in the top-left. This is part of the reason to have Fourier calibration: so the user doesn't have to care or think about these flips and rotations.
    Three more things:

    • We tend to get more reliable results when we overexpose (saturate) the Fourier calibration image to reduce the influence of the 0th order peak on calibration. However, I'm going to poke at your image in a few days to see if I can get it to run more reliably in the general unsaturated case.
    • Your spots look a bit distorted. Have you run wavefront calibration?
    • Your image has an interesting checkerboard pattern at the pixel level. Is this intentional?


LeoKonzett commented on June 12, 2024

Hi @ichristen,

thanks for this answer.

I am coming back to what I wrote above, that is, the issue with the ij basis.

According to this documentation, the ij basis is centered at cam.shape/2.
A shape is always in (height, width) format. That would mean i is the height index and j is the width index. Is that correct?

I then looked at this example in the docs, which uses the following snippet

xgrid, ygrid = np.meshgrid(xlist, xlist)
square = np.vstack((xgrid.ravel(), ygrid.ravel()))                  # Make an array of points in a grid

plt.scatter(square[0,:], square[1,:])                               # Plot the points
plt.xlim([0, fs.cam.shape[1]]); plt.ylim([fs.cam.shape[0], 0])

I am a bit confused about the convention plt.scatter(square[0,:], square[1,:]). If I test this below for a single point

point = (10, 5)
test_arr = np.zeros((20, 20))

plt.imshow(test_arr)
plt.scatter(point[0], point[1])

I do get a point located at x=10, y=5. Wouldn't this imply that the convention is (x, y) and not (h, w)?

I am asking this because when I test my Fourier calibration and blaze to, e.g., cam_point=(200, 600), I get an image with a point centered at (x=200, y=600), which is Cartesian and apparently not what the convention intends.

[Screenshot: Wavefront_Calibration Jupyter notebook, 2023-03-02]

I have also spotted a small bug:

# Line 1987 in algorithms.py
# array_center = toolbox.convert_blaze_vector(
#     (0, 0), "kxy", "ij", cameraslm.slm
# )

array_center = cameraslm.kxyslm_to_ijcam((0, 0))
array_center = np.rint(array_center).astype(np.uint)

This change should do the trick.
Another thing I note is that calling cameraslm.ijcam_to_kxyslm yields an output vector in (x, y) format, which would imply that an input vector in (h, w) format gets changed to an output vector in (x, y) format.

I also noticed that for small array pitches, calling

hologram_exp = SpotHologram.make_rectangular_array(slm.shape,
    array_shape=[25, 25], array_pitch=[20, 20], basis='ij', cameraslm=fs, orientation_check=False)

hologram_exp.plot_farfield(source=hologram_exp.target, title='Initialized Nearfield')

does not yield a perfect grid but rather
[Screenshot: Wavefront_Calibration Jupyter notebook, 2023-03-02, showing the imperfect grid]
I guess this is because my Fourier calibration is a bit wobbly.

Thanks for looking into this!


ichristen commented on June 12, 2024

(x, y) vs (y, x).

The first docs reference you linked is unclear. I updated these docs for clarity (ea31247). This is what's actually happening:

  • Any array / image shape follows the numpy (h, w) convention.
  • Any vector, position, etc. uses the (x, y) convention.

This is the formalism that numpy, scipy, matplotlib, etc. take: the shape and indexing of an array are always inverted (row, column), while functions such as numpy.meshgrid(x, y) (by default), scipy.odr.Data(x, y), or matplotlib.pyplot.scatter(x, y) use the standard Cartesian (x, y) that is more familiar to users. Yes, it's not ideal, but this is the formalism generally adopted by the community.
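Tying this to your single-point test above (a short sketch of the two conventions side by side):

import numpy as np
import matplotlib.pyplot as plt

img = np.zeros((20, 40))  # shape is (h, w) = (rows, columns)
x, y = 10, 5              # a vector / position is (x, y)
img[y, x] = 1             # ...but numpy indexing is [row, col] = [y, x]

plt.imshow(img)           # the bright pixel appears at x=10, y=5
plt.scatter(x, y)         # scatter also takes (x, y), so the marker lands on it
plt.show()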

Gridpoint rounding.

The spots are rounding to the nearest "knm" gridpoint. This is expected behavior. The computational $k$-space cannot have infinite resolution, as this would cost infinite memory (and take a while to Fourier transform). This is why you might want to use padding (calculate_padded_shape()) to effectively enhance the resolution of the SLM farfield. See the first tip in the Hologram constructor to learn more about padding. The Fourier calibration pads by default.
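For instance, a brute-force version of this (a sketch based on your earlier snippet; the only change is a larger computational shape, which is roughly what calculate_padded_shape() helps you choose more precisely):

# Same array, but on a larger computational "knm" grid: a larger shape gives a
# finer knm pixel pitch, so 'ij' spots land on the grid with less rounding error
# (at the cost of memory and FFT time).
hologram_fine = SpotHologram.make_rectangular_array(
    (4096, 4096),                      # larger than slm.shape = (1200, 1920)
    array_shape=[25, 25], array_pitch=[20, 20],
    basis='ij', cameraslm=fs, orientation_check=False)
hologram_fine.plot_farfield(source=hologram_fine.target, title='Padded target')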

Another important note: the plotted image is the target nearfield of the camera and the target farfield of the SLM. It isn't the nearfield of the SLM.

Hope this helps!


ichristen commented on June 12, 2024

Hi Leo,

The TODO list from Feb 6th was merged in with dev (#35), so I'm going to close this issue. Thanks again for raising this!

