
caustics's People

Contributors

adam-coogan, alexandreadam, andreasfilipp, connorstoneastro, dependabot[bot], gabrielmissael, lsetiawan, mjyb16, pre-commit-ci[bot], uwcdc


caustics's Issues

torchinterp1d and KeOps are incompatible with vmap

vmap is not compatible with torchinterp1d or KeOps. Both use torch.autograd.Function, which is not compatible with vmap by default. This will be a problem if you want to e.g. vmap over a bunch of NFWs for simulating or computing magnifications, since they may use torchinterp1d via FlatLambdaCDM. This is fixable, but requires modifying torchinterp1d and KeOps. I've opened issues in both projects' repos.

This isn't really our responsibility to fix, but I wanted to document this since it's good to be aware of this restriction.
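For reference, here is a minimal sketch (not torchinterp1d's or KeOps' actual code) of why a plain torch.autograd.Function fails under vmap:

    import torch
    from torch import vmap

    class Square(torch.autograd.Function):
        # A toy custom Function with no vmap support defined.
        @staticmethod
        def forward(ctx, x):
            ctx.save_for_backward(x)
            return x ** 2

        @staticmethod
        def backward(ctx, grad):
            (x,) = ctx.saved_tensors
            return 2 * x * grad

    x = torch.randn(5, 3)
    # Raises a RuntimeError in torch >= 2.0: custom Functions need extra
    # support (a setup_context staticmethod plus a vmap rule, or
    # generate_vmap_rule = True) before vmap can batch them.
    vmap(Square.apply)(x)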

Semantic lists

We want to be able to pass the simulator a list of length len(dynamic_modules), where dynamic_modules is the collection of modules with dynamic parameters in the simulator.
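A sketch of the desired call pattern (hypothetical names):

    # One packed tensor per dynamic module, ordered as the modules
    # appear in the simulator (hypothetical example).
    x = [
        lens_params,    # e.g. torch.tensor([x0, y0, q, phi, b])
        source_params,  # e.g. torch.tensor([x0, y0, q, n, Re, Ie])
    ]
    sim(x)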

Nomenclature change `sources` -> `light`

I think we should change the sources nomenclature to light, as in a "Sersic light model", just as one speaks of an "SIE lens model". The light models can be used as sources, but also as the lens light, or even just an interloper. In principle the "source" models can represent anything that emits light.

We should also add a new directory for PSF models, since we currently have no internal treatment of the PSF.

Parametrized graph missing recursive parameters

I noticed while constructing a simulator that parametrized seems to miss some recursive parameters when showing the graph. Here I have a simulator with a lens, lenslight, and source model. The lenslight and source models are included in the graph, but the lens is only included at the first level:

[Screenshot from 2023-10-24 15-17-24]

Similarly, printing misses the parameters:

[Screenshot from 2023-10-24 15-20-40]

However, I think the core functionality still works, since the simulator ran without issue and plotting the model looked good. So this must be an issue with the graph print/plot routines getting out of date.

Truncated NFW

A truncated NFW profile is more suitable for subhalo simulations, since the regular NFW has divergent total mass. Having finite mass makes it possible to simulate fields without invalid boundary conditions.
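One common truncation is that of Baltz, Marshall & Oguri (2009) with truncation index n = 1, which multiplies the NFW density by a smooth cutoff (a sketch from the literature; the reference should be checked before implementing):

    \rho_{\rm tNFW}(r) = \frac{\rho_s}{(r/r_s)\,(1 + r/r_s)^2} \cdot \frac{\tau^2}{\tau^2 + (r/r_s)^2}, \qquad \tau = r_t / r_s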

Parametrized

Need to look into gradients of the simulator.

Need to refactor kwargs in parametrized functions.

Kappa grid complex dtype

We should not change the dtype of complex variables so freely; it can break the module. Right now, the .to method looks like this:

    def to(
        self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None
    ):
        super().to(device, dtype)
        self.Psi_kernel = self.Psi_kernel.to(device=device, dtype=dtype)
        self.ax_kernel = self.ax_kernel.to(device=device, dtype=dtype)
        self.ay_kernel = self.ay_kernel.to(device=device, dtype=dtype)
        if self.Psi_kernel_tilde is not None:
            self.Psi_kernel_tilde = self.Psi_kernel_tilde.to(device=device, dtype=dtype)  # Don't do this: this kernel is complex
        if self.ax_kernel_tilde is not None:
            self.ax_kernel_tilde = self.ax_kernel_tilde.to(device=device, dtype=dtype)
        if self.ay_kernel_tilde is not None:
            self.ay_kernel_tilde = self.ay_kernel_tilde.to(device=device, dtype=dtype)

Instead, we should not allow the user to change the dtype of those specific variables, like this:

    def to(
        self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None
    ):
        super().to(device, dtype)
        self.Psi_kernel = self.Psi_kernel.to(device=device, dtype=dtype)
        self.ax_kernel = self.ax_kernel.to(device=device, dtype=dtype)
        self.ay_kernel = self.ay_kernel.to(device=device, dtype=dtype)
        if self.Psi_kernel_tilde is not None:
            # These kernels are complex; move device only and keep their dtype
            self.Psi_kernel_tilde = self.Psi_kernel_tilde.to(device=device)
        if self.ax_kernel_tilde is not None:
            self.ax_kernel_tilde = self.ax_kernel_tilde.to(device=device)
        if self.ay_kernel_tilde is not None:
            self.ay_kernel_tilde = self.ay_kernel_tilde.to(device=device)

Units in docs

All our functions should specify the units of their inputs and outputs. It can be very frustrating to do anything exact without knowing the units a function expects. In some cases the units are arbitrary and will just be passed along as given; in that case we can state so explicitly, so at least it is clear.
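For example, a docstring convention along these lines would do (hypothetical function; the exact signature is illustrative):

    def reduced_deflection_angle(x, y, z_s, params):
        """
        Compute the reduced deflection angle.

        Parameters
        ----------
        x, y: Tensor
            Angular positions on the lens plane [arcsec].
        z_s: Tensor
            Source redshift [unitless].

        Returns
        -------
        Tensor
            Reduced deflection angles [arcsec].
        """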

Implement .to()

Per discussions with @ConnorStoneAstro, for consistency with torch.nn.Module, Base.to() should take optional device and dtype args, where dtype is a floating-point type. The dtype arg should be used to change the type of any member variables.

We need to implement this for all subclasses.
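A minimal sketch of what this could look like (assuming tensor attributes live directly on the instance; the real class structure may differ):

    from typing import Optional

    import torch

    class Base:
        def to(
            self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None
        ):
            # Only floating-point dtypes are allowed, per the discussion above.
            if dtype is not None and not dtype.is_floating_point:
                raise ValueError(f"dtype must be floating-point, got {dtype}")
            # Move/cast tensor attributes and recurse into child modules.
            for name, value in vars(self).items():
                if isinstance(value, torch.Tensor):
                    setattr(self, name, value.to(device=device, dtype=dtype))
                elif isinstance(value, Base):
                    value.to(device=device, dtype=dtype)
            return self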

SIE not valid at q=1

SIE returns nan when evaluated at q=1. It is numerically unstable due to the 1/(1-q^2) term, which should cancel analytically. @AlexandreAdam has a Taylor expansion which solves this issue near q=1; we just need a where statement to flip between them:

    def approximate_deflection_angles(self, r_ein, q, phi, x0, y0):
        b, s = self._param_conv(q, r_ein)
        # rotate to major/minor axis coordinates
        theta1, theta2 = self.rotated_and_shifted_coords(x0, y0, phi)
        psi = torch.sqrt(q ** 2 * (s ** 2 + theta1 ** 2) + theta2 ** 2)
        alpha1 = b * theta1 / (psi + s)
        alpha2 = b * theta2 / (psi + q**2 * s)
        # rotate back to original orientation of coordinate system
        alpha1, alpha2 = self._rotate(alpha1, alpha2, -phi)
        return torch.concat([alpha1, alpha2], dim=1)  # stack alphas into tensor of shape [batch_size, 2, pix, pix]

    def analytical_deflection_angles(self, r_ein, q, phi, x0, y0):
        b, s = self._param_conv(q, r_ein)
        # rotate to major/minor axis coordinates
        theta1, theta2 = self.rotated_and_shifted_coords(x0, y0, phi)
        psi = torch.sqrt(q ** 2 * (s ** 2 + theta1 ** 2) + theta2 ** 2)
        alpha1 = b / np.sqrt(1. - q ** 2) * torch.atan(np.sqrt(1. - q ** 2) * theta1 / (psi + s))
        alpha2 = b / np.sqrt(1. - q ** 2) * torch.atanh(np.sqrt(1. - q ** 2) * theta2 / (psi + s * q ** 2))
        # rotate back
        alpha1, alpha2 = self._rotate(alpha1, alpha2, -phi)
        return torch.concat([alpha1, alpha2], dim=1)

    def _param_conv(self, q, r_ein):
        r_ein_conv = 2. * q * r_ein / np.sqrt(1. + q ** 2)
        b = r_ein_conv * np.sqrt((1 + q ** 2) / 2)
        s = self.s_scale * np.sqrt((1 + q ** 2) / (2 * q ** 2))
        return b, s

Multiplane lensing only agrees with lenstronomy to 1e-2

When running multiplane lensing on two SIEs aligned in two separate planes, there is a difference between the caustic and lenstronomy results of order 2e-3. This can be reproduced by running the multiplane_test.py code in the test folder.

When running a single SIE plane the results agree to high precision:
[Image: multiplanetest_singleplane]

When adding a second plane there must be something magnifying the errors because suddenly the difference goes up to 2e-3:
[Image: multiplanetest]

There is also clear structure in the errors. Note that these images are created simply by taking the log of the absolute difference between the two alpha maps (for the x component).

For this issue we need to determine the source of these differences. Possible explanations: a problem with our cosmology calculator, a unit conversion that differs between packages, lenstronomy's sub-pixel integration, numerical precision (float32 for pytorch vs float64 for lenstronomy), differing numerical stability when evaluating the SIE, or something else.

Softening prescription for numerical stability might be wrong

To avoid dividing by very small numbers, a lot of methods in the lens classes do something like this:

th = (x ** 2 + y ** 2).sqrt() + self.s

where self.s is a softening parameter with a typical default of 0.001. This is different from lenstronomy, which instead does:

th = np.maximum(th, self.s)

Our current approach shifts all radial coordinates by self.s, not just coordinates close to the lens center, which are the ones that would lead to numerical instabilities. This biases our implementation relative to lenstronomy by a global rescaling of our coordinates.

I suggest that we use torch.maximum(th, self.s) as a safeguard against very small radii.
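One wrinkle: torch.maximum expects a tensor for both arguments, so if self.s is a Python float, clamp is the cleaner equivalent. A sketch:

    # Only radii below the softening scale are affected; everything else
    # is left untouched, unlike the current global shift by self.s.
    th = (x ** 2 + y ** 2).sqrt().clamp(min=self.s)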

Pixelated issue with shape

Shape is optional according to the args and docstring, but letting shape take on its default value leads to a TypeError in the check that was meant to trigger the second error message:

File ~/envs/supermage_keops/lib/python3.9/site-packages/caustic/sources/pixelated.py:53, in Pixelated.__init__(self, image, x0, y0, pixelscale, shape, name)
     49 if image is not None and image.ndim not in [2, 3]:
     50     raise ValueError(
     51         f"image must be 2D or 3D (channels first). Received a {image.ndim}D tensor)"
     52     )
---> 53 elif shape is None and len(shape) not in [2, 3]:
     54     raise ValueError(
     55         f"shape must be specify 2D or 3D tensors. Received shape={shape}"
     56     )
     57 super().__init__(name=name)

TypeError: object of type 'NoneType' has no len()
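A possible fix is to flip the condition so len is only called when shape was actually provided, and to require at least one of image or shape; a sketch:

    if image is not None and image.ndim not in [2, 3]:
        raise ValueError(
            f"image must be 2D or 3D (channels first). Received a {image.ndim}D tensor"
        )
    elif image is None and (shape is None or len(shape) not in [2, 3]):
        raise ValueError(
            f"shape must specify a 2D or 3D tensor. Received shape={shape}"
        )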

Implement ExternalShear

Fill in src/caustic/lenses/external_shear.py and add tests to test/test_external_shear.py on the mvp-dev branch.
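For reference, the deflection field of an external shear is simple in the standard convention (a sketch, assuming shear components gamma_1 and gamma_2 and coordinates already relative to the shear center; the actual caustic API may differ):

    def external_shear_deflection(x, y, gamma_1, gamma_2):
        # Gradient of the shear potential
        # psi = (gamma_1 / 2) * (x**2 - y**2) + gamma_2 * x * y.
        ax = gamma_1 * x + gamma_2 * y
        ay = gamma_2 * x - gamma_1 * y
        return ax, ay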

Pixel map more intuitive scale parameters

Currently pixel map objects require a "scale" parameter, but it would be more intuitive to express this in standard quantities: pixel scale, physical distance, or other physical representations.

Inconsistent Unit System and Model Orientation in Relation to Real-World Coordinates

We've noticed that the unit system and orientation of some models in the package are not consistently defined with respect to the real-world coordinate system. This lack of standardization is causing confusion and potential inaccuracies in usage.

Expected Behavior:
The unit system and orientation of all models should be consistent and accurately reflect the real-world coordinate system. This would allow users of the package to accurately model and analyze their data without the risk of inconsistencies or inaccuracies.

Actual Behavior:
The unit system and model orientation are inconsistently applied across different models in the package. This leads to confusion, potential inaccuracies, and makes the package less intuitive to use.

Potential Solution:
It would be beneficial to define a standard for the unit system (based on the FITS WCS) and model orientation in relation to the real-world coordinate system, and to ensure this standard is applied consistently across all models in the package. It would also be helpful to provide clear documentation of this standard, to guide users of the package.

Cannot vmap through Simulator with Pixelated source

Description of the issue

The problem is due to how caustic.utils.interp2d is written. When trying to vmap a simulator with a pixelated source, I get the following torch internal error

RuntimeError: vmap: We do not support batching operators that can support dynamic shape. Attempting to batch over indexing with a boolean mask.

In our code, this error occurs specifically at line 279 of caustic.utils

if padding_mode == "zeros":  # else padding_mode == "extrapolate"
    result[idxs_out_of_bounds] = torch.zeros_like(result[idxs_out_of_bounds])

This can easily be replicated with the following code

import torch
from torch import vmap  # torch >= 2.0

from caustic import Simulator, EPL, FlatLambdaCDM, Pixelated
from caustic.utils import get_meshgrid

n_pix = 20
class Sim(Simulator):
    def __init__(self, name="test"):
        super().__init__(name)
        pixel_scale = 0.04
        fov = n_pix * pixel_scale
        z_l = 0.5
        self.z_s = torch.tensor(1.0)
        self.cosmo = FlatLambdaCDM(h0=None, name="cosmo")  # h0 is a dynamic parameter, supplied through params
        self.epl = EPL(self.cosmo, z_l=z_l, name="lens")
        self.source = Pixelated(x0=0., y0=0., pixelscale=pixel_scale/2, image_shape=[n_pix, n_pix])
        self.thx, self.thy = get_meshgrid(pixel_scale, n_pix, n_pix)
        self.n_pix = n_pix

    def forward(self, params):
        alphax, alphay = self.epl.reduced_deflection_angle(x=self.thx, y=self.thy, z_s=self.z_s, params=params) 
        bx = self.thx - alphax
        by = self.thy - alphay
        return self.source.brightness(bx, by, params)

# default cosmo params
h0 = torch.tensor([0.68, 0.75])

# default lens params 
x0 = torch.tensor([0, 0.1])
y0 = torch.tensor([0, 0.1])
q = torch.tensor([0.9, 0.8])
phi = torch.tensor([-0.56, 0.8])
b = torch.tensor([1.5, 1.2])
t = torch.tensor([1.2, 1.0])

source = [torch.randn([2, n_pix, n_pix])]

cosmo_params = [h0]
lens_params = [x0, y0, q, phi, b, t]
x = [cosmo_params, lens_params, source]
sim = Sim()
vmap(sim)(x)

Discussion

This behavior is probably related in some ways to this torch issue or this one.

This error was noticed while looking at Issue 34, and thus will probably be resolved at the same time.

This problem is a bit annoying, because it means it will be hard to simply vmap over the whole simulator if some functions have odd behaviors like this. I think the fix for now will be to vmap internally and add checks for tensor shape in the pack method to make sure the batch dimension is always present. We can always use our Simulator wrapper to remove this batch dimension post hoc for backward compatibility.
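For the specific offending lines in caustic.utils, a vmap-friendly rewrite is to replace the boolean-mask assignment with torch.where, which keeps shapes static (a sketch, not a tested patch):

    if padding_mode == "zeros":  # else padding_mode == "extrapolate"
        result = torch.where(idxs_out_of_bounds, torch.zeros_like(result), result)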

KappaGrid shape consistency

The deflection fields returned by this class should have shape equal to kappa_map_shape. Currently they are 4D tensors, since that's what interpolate_image returns.

Add caustic to PyPI

Users should be able to install caustic with pip install caustic. Admittedly, it's a bit early to add caustic while features are still in development, but it's best to claim the name on the package index early.

Implement MultiplaneLens

See src/caustic/lenses/multiplane.py for an outline. Should also add test cases to test/test_multiplane.py.

NFW rtol value for test set to high value

The tolerance for the NFW profile comparison to lenstronomy is set to a relatively high value. The NFW psi, kappa, and alpha maps show a systematic offset relative to lenstronomy. To do: double-check the cosmology and constants.

Making the 'name' requirement optional

For ease of use, the 'name' argument for mass and light profiles should be optional, with unique names generated automatically.
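A sketch of one way to generate the automatic names (hypothetical helper, not an existing caustic function):

    _name_counters: dict[str, int] = {}

    def auto_name(cls_name: str) -> str:
        # E.g. auto_name("SIE") -> "SIE_1", then "SIE_2", ...
        _name_counters[cls_name] = _name_counters.get(cls_name, 0) + 1
        return f"{cls_name}_{_name_counters[cls_name]}"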

Test case tolerances too tight

Test cases have tolerances set at very specific values. These should be reset to generic values that will not trip over small numerical fluctuations.

Parametrized module with a single parameter

The issue arises when a module has only one parameter, leading to a problem during execution. For instance, if a user writes:

p = self.unpack(x)

In this case, p becomes a list, and any error that arises within caustic can be difficult to decipher and trace back to this particular point. To address this, there are two potential solutions.

Firstly, the user could manually handle the situation by modifying the code to:

p, = self.unpack(x)

This modification ensures that p becomes the tensor itself instead of a list. Alternatively, within the codebase we could detect the presence of a single parameter and automatically return the tensor itself, simplifying the process for the user and avoiding potential errors.
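A sketch of the second option (the internal helper name is hypothetical):

    def unpack(self, x):
        params = self._unpack_all(x)  # hypothetical internal helper
        # With a single parameter, return the bare tensor so the user
        # doesn't have to remember the `p, = ...` idiom.
        if len(params) == 1:
            return params[0]
        return params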

Implement lens equation solver

This is needed for determining the positions of images of point-like sources (e.g. quasars). I'm imagining something like this:

from typing import Union
from caustic.lenses.base import AbstractThinLens, AbstractThickLens

def forward_raytrace(
    thx0_src,
    thy0_src,
    lens: Union[AbstractThinLens, AbstractThickLens],
    z_l,
    z_s,
    cosmology,
    *args_lens,
    **kwargs_lens,
):
    def fn(thx, thy):
        """
        Function to minimize: distance between reverse ray-traced points and source's
        position.
        """
        bx, by = lens.raytrace(thx, thy, z_l, z_s, cosmology, *args_lens, **kwargs_lens)
        return (bx - thx0_src)**2 + (by - thy0_src)**2
    
    # TODO: implement. Could get guesses for image positions and fit from there.
    ...

It'd be good to write this as a standalone function first and then think about turning it into a method for the base lens classes.

For reference, Lenstronomy handles this in the LensEquationSolver class, which contains a generic solver and a faster one that works with EPL + shear models only. Ideally we'd implement both approaches.

Sort out KappaGrid issues

The plots compare the analytically-calculated potential and deflection fields for a pseudo-Jaffe with the ones computed using an FFT. It looks like the kernels are shifted by a half-pixel, and may need to be integrated over the central few pixels.

[Images: analytic vs. FFT potential and deflection field comparisons]

We can close this issue when the discrepancies are understood and fixed for this comparison (or a different lens mass profile with compact support).

Cosmological distance calculations between same redshifts of different dtype don't return zero

When calling a method from caustic.Cosmology to compute distances between two equal redshifts, the output can be nonzero if the input redshifts are tensors of different float precision. For example:

import torch
import caustic
from caustic.cosmology import FlatLambdaCDM

z1 = torch.tensor(0.5, dtype=torch.float64)
z2 = torch.tensor(0.5, dtype=torch.float32)
cosmo = FlatLambdaCDM()

cosmo.transverse_comoving_distance_z1z2(z1, z2)

Out[8]: tensor(0.0002, dtype=torch.float64)

Perhaps the distance methods in caustic.Cosmology should include a failsafe for such instances.
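One possible failsafe is to promote both redshifts to a common dtype before doing any arithmetic, so identical values cancel exactly; a sketch:

    def _promote(z1: torch.Tensor, z2: torch.Tensor):
        # torch.promote_types picks the higher-precision dtype of the two.
        dtype = torch.promote_types(z1.dtype, z2.dtype)
        return z1.to(dtype), z2.to(dtype)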

Implement realistic (sub)halo samplers

This requires implementing several things:

  • (Sub)halo mass functions
  • Mass-concentration relations
  • Position distributions
  • Warm/self-interacting DM versions of the above

My paper arXiv:2205.09126 and its references should be helpful here.

Batching throughout caustic

Need to solve batching for all of caustic. Under consideration: we could use vmap, enforce batching everywhere, or try to make everything shape agnostic. It's very hard to be shape agnostic for some functions, though.

Finish (T)NFW implementation

I've implemented (but not tested) the untruncated NFW in src/caustic/lenses/nfw.py. Completing this requires implementing the truncated version as well and adding tests for both against lenstronomy (analogous to test/test_sie.py).

Reference: Baltz et al. 2009.

Integrate lensing Jacobian FFT calculation

This issue is centered around _lensing_jacobian_fft_method. There are a few tasks:

  • Refactor into lenses.utils so the function can be used in ThickLens and ThinLens
  • Implement toggle between this method and the jacfwd + vmap one (similar to KappaGrid)
  • Add tests

Built-in source samplers

For prototyping it would be useful for caustic to have a built-in system for sampling reasonable source images. The GitHub repository could store the PROBES sample, and caustic could provide a method to draw a random PROBES galaxy. This would download an HDF5 file (the first time it's called) with the PROBES sample, then randomly select one galaxy and sample it at the desired resolution.

Add cosmology tests

Compare FlatLambdaCDMCosmology with what's in lenstronomy, using consistent parameter values (especially Omega_rad = 0).
