Current implementation: the "footprint" in which the stars are found is used as the reference throughout the pipeline for referring back to the stars (every table carries a "footprint hash" key).
But this footprint does not capture the user-configurable filtering of the stars (magnitude range, maximum astrometric or photometric noise), so two different star selections can share the same hash.
Possible solutions:
include the limits in the hash representing the "footprint" (but risk of collision?)
find an entirely new way of referring back to stars, even though everything is currently built around querying by the hash of the footprint
First steps:
since the footprint hash turned out to be computed the same way throughout the pipeline, refactor that computation into a single function to see more clearly
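A minimal sketch of what that refactored helper could look like, also folding the selection limits into the key (function and parameter names are assumptions, not the pipeline's actual API). Regarding the collision worry: with a cryptographic hash such as SHA-256, adding the limits to the hashed payload does not meaningfully increase collision risk.

```python
import hashlib
import json

def footprint_hash(footprint, mag_range=None,
                   max_astrometric_noise=None, max_photometric_noise=None):
    """Hash the footprint together with the star-selection limits, so that
    changing the filtering yields a different key.
    All names here are illustrative, not the pipeline's real API."""
    payload = {
        "footprint": footprint,
        "mag_range": mag_range,
        "max_astrometric_noise": max_astrometric_noise,
        "max_photometric_noise": max_photometric_noise,
    }
    # JSON with sorted keys gives a canonical, deterministic serialization
    blob = json.dumps(payload, sort_keys=True).encode("utf-8")
    # SHA-256: accidental collisions are a non-issue in practice
    return hashlib.sha256(blob).hexdigest()
```

Because the limits are part of the payload, a change in filtering automatically invalidates old references instead of silently reusing them.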
STARRED's smart_guess function for PSF building sets the FWHM of the Moffat function to 3 pixels. This is too small for many wide-field images, which are oversampled (typical pixel size: 0.2'', typical seeing: 1-1.5'', i.e. a seeing FWHM of 5-7.5 pixels).
make it more flexible in STARRED
create new STARRED release and push to pypi
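One way the initial guess could be made flexible is to derive it from the pixel scale and an estimate of the seeing rather than hard-coding 3 px. A sketch (function name, signature, and the floor default are assumptions, not STARRED's API):

```python
def initial_moffat_fwhm_pix(seeing_arcsec, pixel_scale_arcsec, floor=2.0):
    """Illustrative initial guess for the Moffat FWHM in pixels,
    computed from seeing and pixel scale instead of a fixed 3 px.
    Names and defaults are assumptions, not STARRED's actual API."""
    fwhm = seeing_arcsec / pixel_scale_arcsec
    # keep a small lower bound so the optimizer stays stable on
    # undersampled data
    return max(fwhm, floor)
```

For the typical wide-field case above (0.2''/px, 1'' seeing) this yields 5 px instead of 3.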
Here:
in a new branch (to be created):
modify the requirements of this code base to the newer STARRED version
run the pipeline all the way to the completion of PSF estimation
set redo_psf to true
change the stars the PSF is allowed to use
redo the PSF
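The "set redo_psf" and "change the stars" steps above might map onto a config fragment like this (the dict layout and key names other than redo_psf are assumptions, not the pipeline's actual schema; the star IDs are placeholders):

```python
# hypothetical config for the PSF redo step; only redo_psf is taken from
# the notes above, the other key names and values are illustrative
psf_config = {
    "redo_psf": True,                         # force PSF re-estimation
    "allowed_star_ids": [1042, 1077, 1133],   # restrict which stars the fit may use
}
```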
Currently the photometry selects two PSFs per frame, duplicating the frame (-> two fluxes per frame).
Possibilities for fixing this:
Enforce only one PSF per footprint hash per frame?
Create new key for the PSF? (containing info on which stars are allowed to be used)
just modify the deconvolution/photometry preparation query to select only one PSF? But which one then? Lowest chi-squared? That makes things hard for the user to debug.
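If the query route is taken, the deduplication itself is straightforward: keep a single PSF per (frame, footprint hash), picking the lowest chi-squared. A sketch on plain dicts (field names are illustrative; the real tables may differ):

```python
def pick_one_psf_per_frame(psf_rows):
    """Keep one PSF per (frame, footprint_hash) pair, choosing the row
    with the lowest chi2. Field names are illustrative placeholders for
    whatever the real PSF table uses."""
    best = {}
    for row in psf_rows:
        key = (row["frame"], row["footprint_hash"])
        # first row for this key wins until a lower-chi2 row appears
        if key not in best or row["chi2"] < best[key]["chi2"]:
            best[key] = row
    return list(best.values())
```

The debuggability concern remains valid either way: whichever PSF is chosen, logging the discarded candidates (and their chi-squared values) would keep the selection transparent to the user.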