
hstcal's Introduction

HSTCAL

Calibration software for HST/WFC3, HST/ACS, and HST/STIS.

Nightly regression test results are available only from within the STScI network at this time. https://plwishmaster.stsci.edu:8081/job/RT/job/hstcal/test_results_analyzer/

Install using Conda (Complete Calibration Environment)

HSTCAL can be obtained as part of the Space Telescope Environment (stenv), a conda environment: https://stenv.readthedocs.io/en/latest/. The instructions at this URL cover choosing Miniconda or Anaconda (if not already installed), choosing an stenv release, building the stenv environment from the YAML file, and activating your new environment. Once your environment is activated, you can invoke any of the HSTCAL executables.

# Use HSTCAL
$ calacs.e [...]
$ calwf3.e [...]
$ cs[...].e

Install using Conda (HSTCAL only)

The HSTCAL package is available on conda-forge at https://anaconda.org/conda-forge/hstcal, and you can use the following command to perform the installation.

$ conda install -c conda-forge hstcal==X.Y.Z

Replace X.Y.Z with the desired version number.

Source Installation

Note: This involves compiling C code, which requires that a C compiler and all dependent libraries be installed on your system before installing this package.

Instructions:

  1. Clone the package from github onto your local system using:

git clone https://github.com/spacetelescope/hstcal.git

  2. Compile the code using the instructions provided in INSTALL.md.
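For example, assuming the conventional configure/make flow referenced elsewhere in this document (INSTALL.md is the authoritative source for the exact steps and options):

cd hstcal
./configure
make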


hstcal's Issues

wf3cte possible inconsistent logic in simulating column readout

https://github.com/spacetelescope/hstcal/blob/master/pkg/wfc3/calwf3/wf3cte/wf3cte.c#L1470-L1502

                if ( (ttrap < cte->cte_len) || ( pix_1 >= cte->qlevq_data[w] - 1. ) ){
                    if (pixo[j] >= 0 ){
                        pix_1 = pixo[j] + fcarry; /*shuffle charge in*/
                        fcarry = pix_1 - floor(pix_1); /*carry the charge remainder*/
                        pix_1 = floor(pix_1); /*reset pixel*/
                    }

                    /*HAPPENS AFTER FIRST PASS*/
                    /*SHUFFLE CHARGE IN*/
                    if ( j> 0  ) {
                        if (pixf[j] < pixf[j-1])
                            ftrap *= (pixf[j] /  pixf[j-1]);
                    }

                    /*RELEASE THE CHARGE*/
                    padd_2=0.0;
                    if (ttrap <cte->cte_len){
                        ttrap += 1;
                        padd_2 = Pix(rprof->data,w,ttrap-1) *ftrap;
                    }

                    padd_3 = 0.0;
                    prem_3 = 0.0;
                    if ( pix_1 >= cte->qlevq_data[w]){
                        prem_3 =  cte->dpdew_data[w] / cte->n_par * pixf[j];  /*dpdew is 1 in file */
                        if (ttrap < cte->cte_len)
                            padd_3 = Pix(cprof->data,w,ttrap-1)*ftrap;
                        ttrap=0;
                        ftrap=prem_3;
                    }

                    pixo[j] += padd_2 + padd_3 - prem_3;
                } /*replaces trap continue*/

It appears that the outer conditional exists for efficiency, and that the subsequent (inner) conditionals are there to differentiate which of the two OR'd conditions was true. Since both ttrap and pix_1 are updated before being conditionally re-checked, this leads to inconsistent logic.

In addition, should https://github.com/spacetelescope/hstcal/blob/master/pkg/wfc3/calwf3/wf3cte/wf3cte.c#L1493

read

if ( pix_1 >= cte->qlevq_data[w] - 1.)

rather than

if ( pix_1 >= cte->qlevq_data[w] )

as it currently does?

Should the two outer conditions be assigned to temps and only then used?
master...jamienoss:issue48-wf3cte-inconsistent-logic-in-sim_colreadout_l
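A minimal sketch of that refactor (a hypothetical illustration, not the branch above) would capture both conditions before ttrap and pix_1 are mutated, so the inner branches test exactly the state the outer OR tested:

                const int trapFilling = (ttrap < cte->cte_len);
                const int chargeAboveThreshold = (pix_1 >= cte->qlevq_data[w] - 1.);
                if (trapFilling || chargeAboveThreshold){
                    /* ...shuffle charge in, as above... */
                    if (trapFilling){
                        ttrap += 1;
                        padd_2 = Pix(rprof->data,w,ttrap-1) * ftrap;
                    }
                    if (chargeAboveThreshold){
                        /* ...release/trap the charge, as above... */
                    }
                }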

Furthermore, the use of ttrap as a conditional switch looks odd. It is initially assigned the value of cte->cte_len, which is exactly the value it is then compared against with less-than; consequently, at that point it can only ever be >= cte->cte_len or 0. The intent of its use would be clearer if the condition read if (ttrap == 0). As it stands, it makes me wonder whether it is being used correctly, or whether there is room for a bug to exist.

Yet another concern: ttrap is also used in computing an index into cprof->data and rprof->data while also being updated, so it is unclear whether this interplay was intended.

Use Projects?

GitHub's new "Projects" feature might be useful to manage the deliveries. Just a thought.

WFC3 CTE - lurking bug in population of pixel trap map

There seems to be something not quite right with the below code.

...
/*SCALE BY 1 UNLESS THE PCTETAB SAYS OTHERWISE, I IS THE PACKET NUM
      THIS IS A SAFETY LOOP INCASE NOT ALL THE COLUMNS ARE POPULATED
      IN THE REFERENCE FILE*/

for(i=0; i<RAZ_COLS;i++){
        ff_by_col[i][0]=1.;
        ff_by_col[i][1]=1.;
        ff_by_col[i][2]=1.;
        ff_by_col[i][3]=1.;
        j= cte->iz_data[i]; /*which column to scale*/
        ff_by_col[j][0]=cte->scale512[i];
        ff_by_col[j][1]=cte->scale1024[i];
        ff_by_col[j][2]=cte->scale1536[i];
        ff_by_col[j][3]=cte->scale2048[i];

        /*CALCULATE THE CTE CORRECTION FOR EVERY PIXEL
          Index is figured on the final size of the image
          not the current size. Moved above
          */

        for(j=0; j<RAZ_ROWS; j++){
            Pix(pixz_fff.sci.data,i,j)=hardset;
            ro = j/512.0; /*ro can be zero, it's an index*/
            if (ro <0 ) ro=0.;
            if (ro > 2.999) ro=2.999; /*only 4 quads, 0 to 3*/
            io = (int) floor(ro); /*force truncation towards 0 for pos numbers*/
            cte_j= (j+1) / 2048.0;
            cte_i= ff_by_col[i][io] + (ff_by_col[i][io+1] -ff_by_col[i][io]) * (ro-io);
            Pix(pixz_fff.sci.data,i,j) =  (cte_i*cte_j);
        }
    }
...

As you can see, ff_by_col is being populated with data from the various scaling tables (scale512, scale1024, scale1536, & scale2048), but into column j = cte->iz_data[i] /*which column to scale*/. In the loop below, where ff_by_col is referenced, it is only referenced by i, the outer loop variable, and not by the previous j. The values gathered from the scaling tables therefore aren't being used.

Firstly, ff_by_col only needs to be of dimension [4], not [RAZ_COLS][4]. If I were to take a guess, these lines should perhaps read:

... j= cte->iz_data[i]; /*which column to scale*/
        ff_by_col[i][0]=cte->scale512[j];
        ff_by_col[i][1]=cte->scale1024[j];
        ff_by_col[i][2]=cte->scale1536[j];
        ff_by_col[i][3]=cte->scale2048[j];
...

In addition, if the initial assignment of 1 is meant to account for unpopulated values in the file, an additional mechanism needs to be implemented, as the suggested code would still read whatever value happens to be in the scaling tables.
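If both suggestions were adopted, a hypothetical sketch might look like the following (the bounds check stands in for whatever unpopulated-column convention the PCTETAB actually uses):

double ff[4] = {1., 1., 1., 1.}; /*default scaling of 1*/
int j = cte->iz_data[i];         /*which column's scale factors to use*/
if (j >= 0 && j < RAZ_COLS){     /*guard against unpopulated entries*/
    ff[0] = cte->scale512[j];
    ff[1] = cte->scale1024[j];
    ff[2] = cte->scale1536[j];
    ff[3] = cte->scale2048[j];
}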

At present, the values on file are 1 anyhow, so this is NOT currently a bug; however, when these values are eventually changed in the reference tables, they will be effectively ignored.

Add pointer register for easy book keeping of allocated memory

Since hstcal is not written in C++, error handling is cumbersome. Exiting partway through the code, after various memory has been allocated, requires manually keeping track (bookkeeping) of the allocations so that they can be freed upon return. From what I have seen in calacs and calwf3, this manual bookkeeping isn't even being implemented: typically only one allocated pointer is freed per return, with little regard for anything else previously allocated that is, perhaps, outside the developer's 'focus'.

For the cte work I have implemented a simple register struct with functions to make this significantly easier for us to do.
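The idea, as a minimal self-contained sketch (names and fixed capacity are hypothetical, not the actual implementation in the PR):

#include <stdlib.h>

#define PTR_REGISTER_MAX 64

typedef struct {
    void * ptrs[PTR_REGISTER_MAX];
    size_t count;
} PtrRegister;

void initPtrRegister(PtrRegister * reg) { reg->count = 0; }

/* Record an allocation; returns the pointer unchanged for convenience. */
void * addPtr(PtrRegister * reg, void * ptr) {
    if (ptr && reg->count < PTR_REGISTER_MAX)
        reg->ptrs[reg->count++] = ptr;
    return ptr;
}

/* Free everything recorded, in reverse order of allocation. */
void freeAll(PtrRegister * reg) {
    while (reg->count)
        free(reg->ptrs[--reg->count]);
}

Every early return then reduces to freeAll(&reg); return status; instead of a hand-maintained chain of free() calls.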

NOTE: the existing PR doesn't use this register; it simply adds it so that it can be used. This is effectively me splitting the cte work into more logical commits for git hygiene.

New SINKCORR for CALACS

CALACS wants to do SINK pixel flagging like CALWF3 but might use a slightly different scheme of flagging.

Currently not sure if this should be done by Jamie as part of #13 (new PCTECORR) or in parallel by me (in case Jamie does not have time). I think Megan did both at the same time.

Implementation Details

The following details are extracted mostly from related documentation, with some from private communications:

  • New reference files with suffix *_snk.fits, with DQ value 1024 for affected pixels. This file will be populated by CRDS into the SNKCFILE FITS header keyword in the PRIMARY header of the RAW image.
  • New flag named SINKCORR in the PRIMARY header of RAW image, stating whether to apply SNKCFILE or not.
  • Should only be applied if BIASCORR=COMPLETE.
  • Format: The file is a two-dimensional FITS MEF but does not contain ERR or DQ image extensions. There are 2 SCI image extensions, Chip 2 in EXT=1 (CD) and Chip 1 in EXT=2 (AB), in RAW full-frame unbinned size of 2068 x 4144. Storage-wise, wouldn't it be more efficient to store this info as a binary table? Ah, well.
  • Test SNKCFILE was provided by Jenna Ryon.

Questions:

  • Why are there no DQ extensions in the reference image? It would be much easier for INS to calculate however they want to populate the DQ and simply deliver the DQ values to apply. It is inefficient for CALACS to recalculate the same thing over and over from SNKCFILE's SCI data for every input image. The ISR talks about flagging based on background level, but how is CALACS supposed to estimate background blindly? The science image pixel value is needed for the calculation (not a global background), so the exact flags will be different for each image.
  • Will all WFC images have these new keywords? That is, full reprocessing of the static archive. It is mentioned that there "will not be a SNKCFILE before January 2015," but does that mean no keywords, or that they are set to OMIT and N/A? Yes.
  • Where exactly in CALACS is this correction step going? After BLEVCORR, before PCTECORR, which puts it as the last step in ACSCCD.
  • If it is set to PERFORM but BIASCORR!=COMPLETE, skip silently, error, or warning? Just a warning (see the sketch after this list).
  • Extract the subarray region for subarrays? It is implied but I want to make sure. Yes.
  • Which pieces of CALWF3 do you want to copy over? wf3ccd/sink.c
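A hypothetical sketch of that gating (struct members and the helper name are illustrative, not the eventual implementation; PERFORM/COMPLETE are the usual calibration-switch values):

if (acs->sinkcorr == PERFORM) {
    if (acs->biascorr != COMPLETE) {
        /* per the answer above: warn, don't error */
        trlwarn("SINKCORR requires BIASCORR = COMPLETE; skipping step.");
    } else {
        /* flag sink pixels: apply DQ value 1024 from SNKCFILE to the
           science DQ array, extracting the subarray region first when
           the input is a subarray */
        if (doSinkFlagging(acs, &x))
            return (status);
    }
}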

Todo:

  • Need to implement a new step for this.
  • Need Jenna's Python flagging script for reference.
  • Need test images and SNKCFILE for regression test. Maybe a fullframe and a subarray. That includes science input, reference file, and expected output.

Update from Norman (2017-02-14)

This has nothing to do with the new CTE code. The intent is to clone the CALWF3 sinkcorr stage into CALACS, with minor modification due to our change in reference file conventions.

We will generate a new sinkcorr reference file every anneal cycle (but only in the flashed-darks era), with appropriately indexed USEAFTER. We also change the head-pixel labeling convention, to reflect the depth of the sink rather than its birthday. We may use a different DQ bit for WFC sinkpix than is used by CALWF3: Jenna and/or David can confirm.

Jenna should be able to walk you through the modifications to the CALWF3 sinkcorr code (perhaps with a little assistance from David). She should be able to hand you a prototype WFC sinkcorr reference file immediately.


'make clean' removes object files but not the entire build dir

I've just noticed that 'make clean' doesn't remove the compiled objects but instead removes the executable dir; it is 'make distclean' that removes the build folder. This is not normal behavior. In fact, for a developer, this is quite bad news.

'make clean' should clean up all built files as well as the final product - anything produced by 'make'.
'make distclean' should exec 'make clean' and also remove the makefiles and other infrastructure produced by ./configure, i.e. anything pre-make/compilation/build, thus returning the dir to its clean distribution state.

Keep in mind #40

EDIT:
'make clean' does remove the object files but simply leaves the, now empty, build dir in place.

Uninstalling HSTCAL

When installed via AstroConda, it can be uninstalled with conda uninstall hstcal. But when installed from source using waf, I cannot uninstall it using pip uninstall hstcal. The only other way is to manually hunt down and delete the executable files. Is there something else that I am not aware of?

wf3cte - add subarray images to regression suite

Within the regression suite for wf3/uvis there are only 6 raw images with PCTECORR = PERFORM, none of which are subarrays; they are all full-frame images.

The code to deal with subarrays is drastically different and should not be excluded from the regression suite.

Address the many compilation warnings

When compiling hstcal (at least with clang) there is an overwhelming number of warnings, which clouds the build output and makes it difficult and tedious to locate any warnings possibly introduced during development.

This issue is the parent of the various sub-issues, each addressing the warnings on an individual basis:

#36 (-Wsometimes-uninitialized)
#38 (-Wabsolute-value)

CALACS to work with new OSCNTAB

Originally, this was the suggested fix for #19 (the purist way). First attempted in #21 but went with #22 due to time and resource constraints. However, this should still be done in the long term and folded in with the next time ACS data need to be reprocessed for other reasons (new CTE algorithm, new bias correction, etc).

Note: Initial testing by Matt McMaster indicated that the new OSCNTAB (now defined w.r.t. subarray coordinates for subarrays, not full-frame coordinates) causes BLEVCORR to produce a different result from the one in the Archive (using CALACS 8.3.4) for aperture WFC1B-2K. More investigation is required.

'make clean' rm's all cfitsio build dir files instead of just built object and library files.

Normally 'make clean' should only remove those files produced by the build, i.e. those from using 'make', so that a subsequent, clean build attempt is possible; 'make distclean', on the other hand, should revert the build to the pre-configure stage. It turns out that whilst the configure script populates the cfitsio build dir with the src files etc., 'make clean' removes them rather than just the object and library files. This results in a subsequent 'make' of hstcal failing because libcfitsio.a doesn't exist, since there were neither files to build (in the build dir) nor a makefile to do so (at least I think this is what's happening).

Fix failing ACS tests on public builds

This is to be done ASAP after the public release for the HSTDP 2017.2 delivery. Currently, the tests are failing because they picked up changes to input files meant for the dev build, which are not backward compatible. Such conflict is unavoidable due to how the test system was historically set up.

Fix hstcal and acstools on:

  • Linux public
  • Mac public

BLEVCORR cannot handle polarizer/ramp using new subarray format

In the original new-subarray change request (#6), only the new apertures were expected to use the new subarray format. Now, polarizer and ramp exposures also use the new subarray format; therefore, BLEVCORR fails to remove the virtual overscan properly for them. Note that these polarizer/ramp apertures are not new, so we need to be careful to make any change fully backward compatible. The ACS Team is deciding on a course of action.

wf3cte unnecessary re-computation of cte correction due to over-subtracted tails

This seems like a bug to me.

https://github.com/spacetelescope/hstcal/blob/master/pkg/wfc3/calwf3/wf3cte/wf3cte.c#L1344

As is, the code checks whether the cte correction has over-subtracted a tail; if so, it downscales the cte scaling and re-runs the correction for that column. However, it is possible for it to re-run the exact same column correction 4 times - 'exact' because it's possible for the conditional checking for the tails not to be re-entered (i.e. the first re-run was enough), thus never downgrading the cte scaling on subsequent re-runs, so nothing changes for the other 3 re-runs.

It could be that the first re-run was sufficient, in which case further re-computations of the column's correction should be prevented.

I think REDO should be reset on each loop?
master...jamienoss:unnecessary-re-comp-of-cte-cor-cols-for-over-sub-tails-wf3cte
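A minimal sketch of the suggested fix (loop structure and names are simplified from the actual code):

for (nRedo = 0; nRedo < 4; nRedo++) {
    REDO = 0; /*reset on each pass so a clean re-run stops the retries*/
    /* ...re-run the column correction; set REDO = 1 only if an
       over-subtracted tail is detected on THIS pass... */
    if (!REDO)
        break; /*first clean pass ends the re-runs*/
}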

Amend the CTE routine for subarrays to execute only on subarrays with pre-scan pixels

The rules for turning on PCTECORR for subarrays should also be updated to reflect these supported arrays.

Since users can run the code themselves and also alter the *corr keywords, calwf3 will be updated to run a check against the name of the subarray being processed, as a backup.

It's impossible to perform part c) below in the pipeline, but users who wish to try it can submit the image to the subroutine as a full-frame image.

b) The current routine requires access to pre-scan bias pixels. 
             Subarrays that are currently supported:
             UVIS1-2K4-SUB
             UVIS1-2K2A-SUB
             UVIS1-2K2B-SUB
             UVIS1-2K2C-SUB
             UVIS1-2K2D-SUB
             UVIS2-C1K1C-SUB
             UVIS2-C512C-SUB

         c) There are some subarrays that do not have pre-scan pixels; 
             these are not supported at this time.  However, please see STAN Issue 18  
             for a work-around solution for CTE-correcting unsupported subarrays.
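As a backup check inside calwf3, something as simple as comparing the APERTURE value against the supported list above could work (a hypothetical helper, not the actual change):

#include <string.h>

static const char * kSupportedSubarrays[] = {
    "UVIS1-2K4-SUB", "UVIS1-2K2A-SUB", "UVIS1-2K2B-SUB", "UVIS1-2K2C-SUB",
    "UVIS1-2K2D-SUB", "UVIS2-C1K1C-SUB", "UVIS2-C512C-SUB",
};

/* Returns 1 if the named subarray has pre-scan pixels and is therefore
   supported by the CTE routine, 0 otherwise. */
int subarraySupportsCTE(const char * aperture)
{
    size_t i;
    for (i = 0; i < sizeof kSupportedSubarrays / sizeof *kSupportedSubarrays; ++i) {
        if (strcmp(aperture, kSupportedSubarrays[i]) == 0)
            return 1;
    }
    return 0;
}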

cte smoothing - computation along row or column?

This looks ok to me but wanted to check.

The following code computes an rms-like value along each row and compares it to something along the lines of the read noise (?), rnsig. Is this computed along rows because it corresponds to the serial-transfer readout?

for(j=0; j<RAZ_ROWS; j++){
            nrmsu=setdbl;
            rmsu=setdbl;
            for(i = 0;i<RAZ_COLS; i++){
                if ( (fabs(Pix(raz->sci.data,i,j)) > 0.1) ||
                        (fabs(Pix(rsz->sci.data,i,j)) > 0.1) ){
                    rmsu  +=  ( Pix(rnz.sci.data,i,j) * Pix(rnz.sci.data,i,j) );
                    nrmsu += 1.0;
                }
            }
            #pragma omp critical (rms)
            {   rms  += rmsu;
                nrms += nrmsu;
            }
        }
        rms = sqrt(rms/nrms);

        /*epsilon type comparison*/
        if ( (rnsig-rms) < 0.00001) break; /*this exits the NIT for loop*/

This should definitely be computed along a row and not a column, right?

Add build configure option --O3

This invokes the compiler optimization flag -O3.

--debug takes precedence.

Upon use, this spawns a warning message requesting user input to either continue or abort configuration.

wf3cte - uninitialized pointer causes free of unallocated memory.

The following code excerpts track the code path leading to a segfault when a subarray is passed in as input. The issue stems from an uninitialized pointer, HdrArray * array, which is a member of Hdr. This opens the door to incorrect NULL checking, which in this instance led to an attempted double free.

Hdr scihdr is declared but never initialized before being referenced when passed into getHeader(). getHeader(), present in hstio.c, then calls allocHdr(), and the following code tests the 'garbage' in h->array as true, causing the free of unallocated memory.

...
 if (h->array == NULL || h->nalloc != n) {
     if (h->array != NULL)
         free(h->array);
...
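A sketch of the immediate fix, assuming hstio's initHdr() is the appropriate initializer (the getHeader() call is shown schematically; ip stands in for the actual descriptor):

Hdr scihdr;
initHdr(&scihdr);            /* struct starts in a known state: array is NULL */
if (getHeader(ip, &scihdr))  /* allocHdr() now tests real state, not garbage */
    return (status);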

This issue has always existed; however, it had been intentionally masked by the use of -fstack-protector-all, added in commits 13fc8aa and ed26e35 with the following message...

  • moved the regular compile from optimization 1 to 2, added -fstack-protect-all flag, still need to figure out where exactly the stack buffer overflow is happening, but its' consistently producing the correct results for now at least.

However, this compiler option is not set for a debug build, which is how I found the issue. I have yet to figure out how using -fstack-protector-all prevents the issue in the first place; it must be doing something more than you'd think it 'should' be doing, obviously.

I strongly advise that this flag be removed. If we want to keep it, we should open an issue explicitly outlining our case study for needing it, i.e. security etc., and thus have that documentation track the change via 'git blame'.

Whilst on the subject, I also strongly advise that we stop writing C code to the C89 standard, or as if it were Fortran. Declarations should be left as late as possible, i.e. on the line just before first use and no wider than the required scope, immediately followed by initialization. When we finally get around to removing all of the uninitialized-variable warnings, we should promote this warning to an error.

Add correction factor to FLSHCORR step in CALACS

Details from Andrea and Norman below. Targeted for OPUS delivery in Spring 2017.

Norman

In the FLSHCORR stage, there is a point at which the flash frame is input -- before subtraction from the flashed image to be corrected.

At that point, before the subtraction, we need to multiply each quadrant of the reference image by a separate correction factor that is dependent on the FLASHDUR keyword in the image to be corrected. Then the subtraction can occur, as usual (including the scaling of the tweaked reference image by the FLASHDUR).

If only a portion of that image is read in, because we are dealing with a subarray, then only apply the correction factor suitable to that subarray.

...

Think of this request as a correction to the flash reference file, rather than as a correction to flashed imagery.

If FLSHCORR is being applied equivalently to old and new subarray data --- which it should --- then this correction must also be applied to old and new subarray data.

If there is already some logic in place to handle flash-subtraction on jumbo/displaced subarrays (spanning the CCD mid-column line) [PL: No such thing is being done], then preserve that logic but adjust each quadrant of the (full-frame) flash reference image before excerpting to FLASHDUR-scale and subtract.

Andrea

The equations for AMPS A, B and D are of the form:

f(x) = 1 - ( a * sinh(b * x - 1) /  exp( c * x - 1) )

For AMP C:

f(x) = 1 + ( a / (x ** b + c) )

where x in the equations is the value of the FLASHDUR keyword in the header.

The coefficients are:

AMP A:

a = -0.27143973
b = 0.89499787
c = 1.64930466

AMP B:

a = -0.38018589
b =  1.63426073
c = 2.19362160

AMP C:

a = 0.04802005
b = 2.12968265
c = 0.01922919

AMP D:

a = -0.32215275
b = 1.19033580
c = 1.59900158

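Translated directly into C, the correction factor reads as below (a sketch of the equations above only; how it is wired into the FLSHCORR step is a separate matter):

#include <math.h>

/* Correction factor f(x) per amp, where x is the FLASHDUR keyword value. */
double flashCorrFactor(char amp, double x)
{
    switch (amp) {
    case 'A': return 1. - (-0.27143973 * sinh(0.89499787*x - 1.) / exp(1.64930466*x - 1.));
    case 'B': return 1. - (-0.38018589 * sinh(1.63426073*x - 1.) / exp(2.19362160*x - 1.));
    case 'C': return 1. + (0.04802005 / (pow(x, 2.12968265) + 0.01922919));
    case 'D': return 1. - (-0.32215275 * sinh(1.19033580*x - 1.) / exp(1.59900158*x - 1.));
    default:  return 1.;
    }
}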

wfc3cte - possible incorrect conversion from DN to electrons

The following code:

 gain=wf3->ccdgain;

    /*REFORMAT TO RAZ*/
    makeRAZ(cd,ab,raz);


    /*SUBTRACT THE EXTRA BIAS CALCULATED, AND MULTIPLY BY THE GAIN
      Note that for user subarray the image is in only 1 quad, and only
      has prescan bias pixels so the regions are different for full and subarrays
    */
    if (wf3->subarray){
        findPreScanBias(raz, bias_pre, bsig_pre);
        for (k=0;k<4;k++){
            for (i=0; i<subcol;i++){
                for (j=0;j<RAZ_ROWS; j++){
                    if(Pix(raz->dq.data,i+k*subcol,j))
                        Pix(raz->sci.data,i+k*subcol,j) -= bias_pre[k];
                        Pix(raz->sci.data,i+k*subcol,j) *= gain;
                }
            }
        }
    } else {
        findPostScanBias(raz, bias_post, bsig_post);
        for (k=0;k<4;k++){
            for (i=0; i<subcol;i++){
                for (j=0;j<RAZ_ROWS; j++){
                    Pix(raz->sci.data,i+k*subcol,j) -= bias_post[k];
                    Pix(raz->sci.data,i+k*subcol,j) *= gain;
                }
            }
        }
    }

    return(status);

Here the code uses the commanded gain, WF3Info::ccdgain, rather than the actual per-amp gain, WF3Info::atodgain[].
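A sketch of the suggested change for the full-frame branch (the quad-to-amp index mapping for atodgain is an assumption here):

for (k=0; k<4; k++){
    /* per-amp calibrated gain rather than the single commanded gain */
    double ampGain = wf3->atodgain[k];
    for (i=0; i<subcol; i++){
        for (j=0; j<RAZ_ROWS; j++){
            Pix(raz->sci.data,i+k*subcol,j) -= bias_post[k];
            Pix(raz->sci.data,i+k*subcol,j) *= ampGain;
        }
    }
}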

calacs should conform to the version flags

It looks like calacs was the only one not updated to produce the version string and version number via separate flags, i.e.

calxxx -r should produce the full revision string
calxxx --version should produce just the version id

WFC3:

(astroconda) bash-3.2$ calwf3.e -r
Current version: 3.4(28-Sep-2016)
(astroconda) bash-3.2$ calwf3.e --version
3.4

ACS:

(astroconda) bash-3.2$ calacs.e -r
Unrecognized option -r
(astroconda) bash-3.2$ calacs.e --version
8.3.5 (21-Oct-2016)

COS:

(astroconda) bash-3.2$ calcos --version
3.1.8
(astroconda) bash-3.2$ calcos -r
3.1.8 (2016-06-29)

STIS:

(astroconda) bash-3.2$ cs0.e -r
3.4 (13-November-2013)
(astroconda) bash-3.2$ cs0.e --version
3.4

HSTDP is using these returned values.

PR87233

Address pedestal effect on ACS/WFC AMP A

Status: No action required

Andrea, "... the residuals for amp A are still ~0.35 electrons too negative (see attached plot)."

[Attached plot: amp A pedestal residuals]

After some discussions, it was decided that this should be addressed separately from #23 and more investigation is needed to ensure this offset is consistent across epochs, readouts, apertures, etc.

wf3cte - omp_get_num_procs is sometimes incorrect

On OSX 10.11 (El Capitan), varying the number of cores available, logical (hyperthreaded) or physical, is not correctly reflected in the returned value from omp_get_num_procs or omp_get_thread_limit. Varied using xcode's 'Instruments' functionality.
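A minimal probe for comparing what the runtime reports on a given machine (standard OpenMP API calls; build with something like cc -fopenmp probe.c):

#include <stdio.h>
#include <omp.h>

int main(void)
{
    printf("omp_get_num_procs()    = %d\n", omp_get_num_procs());
    printf("omp_get_max_threads()  = %d\n", omp_get_max_threads());
    printf("omp_get_thread_limit() = %d\n", omp_get_thread_limit());
    return 0;
}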

wf3cte - boundary columns not being smoothed & those adjacent being so twice.

The following code seems incorrect.

...
for (j=0; j<RAZ_ROWS; j++){
   if(Pix(raz->dq.data,imid,j)) {
      find_dadj(1+i-imid,j, obs_loc, rsz_loc, rnsig, &dptr);
      Pix(zadj.sci.data,i,j) = dptr;
   }
}
...

I believe that the conditional should be referencing pixel [i, j], not [imid, j] as it does. This code pre-smooths images before correcting for cte.

As is, I believe that the 1st & last columns of a subarray will not be adjusted/smoothed, and that in addition the second & second-to-last columns will be adjusted/smoothed twice.

Vertical post-scan identification of readout CRs

We should limit the search for readout CRs to the actual data pixels. It often happens that a bright object near the top of the detector fills a lot of traps; when the trails are then subtracted in the post-scan (which starts with zero electrons), there is enough of an excess to identify the bright object in the image as a readout CR, and to lower the CTE correction for the object. This can be traced to two issues. First, each column has a different number of traps, and the model doesn't always accurately account for that. Second, the zero-background level of the post-scan is a very difficult environment for the correction to work in; expecting our model of the CTE tails to work perfectly there is asking a lot, so a little over-subtraction shouldn't be surprising. Solution: just probe the first 2048 pixels for trail over-subtraction, as sketched below.
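A sketch of that limit (the 2048 data pixels per column comes from the discussion above; the test itself is elided):

const int nDataPix = 2048; /*true data pixels per column; the rest is virtual*/
for (j = 0; j < nDataPix; j++) {
    /* ...trail over-subtraction / readout-CR test on pixel j... */
}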

wf3cte possible missing break statement

The following code

/*GO DOWNSTREAM AND LOOK FOR THE OFFENDING CR*/
    for (jj=j-10; jj<=j;jj++){
        if ( (pix_modl[jj] - pix_obsd[jj]) > (pix_modl[jmax] - pix_obsd[jmax]) ) {
            jmax=jj;
         }
    }

Should there be a break after jmax = jj or some other logic depending on what is really being looked for here?

wfc3 - fix association trailer output file so that all members are copied

Currently the primary association trailer, *0.tra, contains two copies of the first member's trailer-file information; this should be changed to include a copy from each member's trailer file. All the other trailer files seem fine.

This will be delivered to the archive in the fall with the subarray updates.

waf not compatible with Python 3?

I get this error running ./waf configure --debug from an Anaconda env with Python 3.5.1.

Checking for 'gcc' (c compiler)          : ok 
Checking for 'gfortran' (fortran compiler) : ok 
Traceback (most recent call last):
  File ".../waflib/Scripting.py", line 93, in waf_entry_point
    run_commands()
  File ".../waflib/Scripting.py", line 145, in run_commands
    run_command(cmd_name)
  File ".../waflib/Scripting.py", line 138, in run_command
    ctx.execute()
  File ".../waflib/Configure.py", line 127, in execute
    super(ConfigurationContext,self).execute()
  File ".../waflib/Context.py", line 87, in execute
    self.recurse([os.path.dirname(g_module.root_path)])
  File ".../waflib/Context.py", line 127, in recurse
    user_function(self)
  File ".../wscript", line 149, in configure
    conf.check_fortran()
  File ".../waflib/Configure.py", line 220, in fun
    return f(*k,**kw)
  File ".../waflib/Tools/fc_config.py", line 32, in check_fortran
    self.check_cc(fragment=FC_FRAGMENT,compile_filename='test.f',features='fc fcprogram',msg='Compiling a simple fortran app')
  File ".../waflib/Configure.py", line 220, in fun
    return f(*k,**kw)
  File ".../waflib/Tools/c_config.py", line 450, in check_cc
    return self.check(*k,**kw)
  File ".../waflib/Configure.py", line 220, in fun
    return f(*k,**kw)
  File ".../waflib/Tools/c_config.py", line 356, in check
    ret=self.run_c_code(*k,**kw)
  File ".../waflib/Configure.py", line 220, in fun
    return f(*k,**kw)
  File ".../waflib/Tools/c_config.py", line 434, in run_c_code
    bld.compile()
  File ".../waflib/Build.py", line 191, in compile
    self.store()
  File ".../waflib/Utils.py", line 302, in f
    ret=fun(*k,**kw)
  File ".../waflib/Build.py", line 165, in store
    cPickle.dump(data,f)
AttributeError: Can't pickle local object 'Context.__init__.<locals>.node_class'

In a different Anaconda environment with Python 2.7.12 on the same machine, it was successful:

Checking for 'gcc' (c compiler)          : ok 
Checking for 'gfortran' (fortran compiler) : ok 
Compiling a simple fortran app             : yes 
Checking for OpenMP                        : OpenMP found. 
Checking for sizeof(int)                   : yes 
Using cfitsio                              : local 
Checking for program make                  : /usr/bin/make 
Copying cfitsio                            : done 
Configuring cfitsio                        : done 
Checking for compiler flags -g             : yes 
Checking for compiler flags -O0            : yes 
Checking for compiler flags -Wall          : yes 
'configure' finished successfully (5.457s)

hstio.c:getFloatData - weird logic regarding data dimensions & population

The following code looks odd to me. If the number of dimensions, no_dims == 0, the code tries to work out the actual dimensions (I'm presuming this is some sort of recovery action), and the passed-in data array, da, has all of its 2D elements populated. Ok, so I guess that looks ok(?). However, for no_dims == 1 there is no loop and only PPix(da, 0, 0) is accessed, and similarly, for no_dims == 2 only a single column is populated per read. Is it just me, or does this seem the wrong way around? What have I missed?

       ...

       if (no_dims == 0) {
            kw = findKw(iodesc->hdr,"PIXVALUE");
            if (kw == 0) { ioerr(BADSCIDIMS,iodesc, 0); return -1; }
            val = getFloatKw(kw);

            kw = findKw(iodesc->hdr,"NPIX1");
            if (kw == 0) { ioerr(BADSCIDIMS,iodesc, 0); return -1; }
            iodesc->dims[0] = getIntKw(kw);

            /* If NPIX2 is not found, the image should be 1D; dim2 = 1 and *
             * not 0 for purposes of memory allocation.                    */
            kw = findKw(iodesc->hdr,"NPIX2");
            if (kw == 0)  {
                iodesc->dims[1] = 1;
            } else {
                iodesc->dims[1] = getIntKw(kw);
            }

            if (allocFloatData(da, iodesc->dims[0], iodesc->dims[1])) return -1;
            for (j = 0; j < iodesc->dims[1]; ++j) {
                for (i = 0; i < iodesc->dims[0]; ++i) {
                    PPix(da, i, j) = val;
                }
            }
        } else if (no_dims == 1) {
            iodesc->dims[1] = 1;
            fits_get_img_equivtype(iodesc->ff, &type, &status);
            /* CFITSIO TODO: Should we verify the type is correct
               here?  Original code gets type, but then does nothing
               with it. */
            if (allocFloatData(da, iodesc->dims[0], iodesc->dims[1])) return -1;
            fpixel[0] = 1;
            fpixel[1] = 1;
            if (fits_read_pix(iodesc->ff, TFLOAT, fpixel, iodesc->dims[0], 0,
                              (float *)&(PPix(da, 0, 0)), &anynul, &status)) {
                ioerr(BADREAD, iodesc, status);
                return -1;
            }
        } else if (no_dims == 2) {
            fits_get_img_equivtype(iodesc->ff, &type, &status);
            /* CFITSIO TODO: Should we verify the type is correct
               here?  Original code gets type, but then does nothing
               with it. */
            if (allocFloatData(da, iodesc->dims[0], iodesc->dims[1])) return -1;

            fpixel[0] = 1;
            for (i = 0; i < iodesc->dims[1]; ++i) {
                fpixel[1] = i + 1;
                if (fits_read_pix(iodesc->ff, TFLOAT, fpixel, iodesc->dims[0], 0,
                                  (float *)&(PPix(da, 0, i)), &anynul, &status)) {
                    ioerr(BADREAD,iodesc, status);
                    return -1;
                }
            }
        } else {

       ...

Notes for speedup of cte code

DMS would like the code to run faster, but they don't want to use the parallel processing inside the code; rather, they want to use the coarse-grained parallel processing in HTCondor. So these are notes looking at speeding up the algorithm for a single processor.

For later reference, from the performance profiling I did a couple months ago:

  • pulling find_dadj into the upper function cuts down on calls, so it's a reasonable update, but doesn't make a huge timing difference.
  • condensing the logic loops into statements with fewer resulting instructions made a small difference.
  • rsz2rsc and sim_colread show up here as the beasties; there is some complicated logic there, so compacting it will take some thought. Below is the short figure, but I think the call structure is not quite right, perhaps from the way I did the profile; go back and check.

[Figure: calwf3 speed profiling shortlist]

Also need to go back and do a few different types of runs to validate these numbers; the current eye-bleeding figure:

[Figure: full profiling output]

Making this an issue now, but we may expand to a project if ACS and WFC3 want to work on it together. The earliest any updates from this work would go into DMS is summer 2017.

wf3cte possible unwanted sign inversion in correction and/or dampening computation

https://github.com/spacetelescope/hstcal/blob/master/pkg/wfc3/calwf3/wf3cte/wf3cte.c#L1289-L1295

for (j=0; j< RAZ_ROWS; j++){
    dmod =  (pix_obsd[j] - pix_read[j]);
    if (NITINV < cte->n_forward){
       dmod *= (dmod*dmod) /((dmod*dmod) + (cte->rn_amp * cte->rn_amp));
    }
    pix_modl[j] += dmod; /*dampen each pixel as the best is determined*/
}

The above dampening code comes after the main loop iterating over the number of readout transfers (per column), which calls sim_colreadout_l(pix_curr, pix_read, pix_ctef, cte); this section computes the readout simulation for a given column.

Within each iteration pix_curr is copied from pix_read
https://github.com/spacetelescope/hstcal/blob/master/pkg/wfc3/calwf3/wf3cte/wf3cte.c#L1283

Before each loop, pix_obs and pix_model are copied from the same source. This effectively gives the pseudo code below:

...
pix_observed[:] = input[:];

for ()
{
    sim_colreadout_l(pix_model, pix_observed, ...); //pix_model is returned as pix_observed + cte correction
}


//Dampen
delta = pix_observed[:] - pix_model[:]
if (dampen)
    delta = delta * dampenFraction;
pix_model[:] = pix_observed[:] + delta;

...

The issue here is whether the delta should actually be pix_model - pix_observed. Below shows the math:

model = observed + cteCorrection //result from sim_colreadout

delta = observed - model = observed - observed - cteCorrection = -cteCorrection

model = observed + (-cteCorrection)*(+dampenFraction)

Let's always assume that dampenFraction = 1, but still keep the current code implementation; then before the dampening we have:

model = observed + cteCorrection

yet after (the dampening code) we have:

model = observed - cteCorrection

This is the principal issue.

As a small clarification aid, the following two code changes do not change the output results.

...
/*DAMPEN THE ADJUSTMENT IF IT IS CLOSE TO THE READNOISE, THIS IS
   AN ADDITIONAL AID IN MITIGATING THE IMPACT OF READNOISE*/
for (j=0; j< RAZ_ROWS; j++){
    //dmod =  (pix_obsd[j] - pix_read[j]); //ORIGINAL CODE
    dmod =  (pix_read[j] - pix_obsd[j]); //CHANGED CODE
    if (NITINV < cte->n_forward){
        dmod *= (dmod*dmod) /((dmod*dmod) + (cte->rn_amp * cte->rn_amp));
    }
    //pix_modl[j] += dmod;  //ORIGINAL CODE
    pix_modl[j] -= dmod;  //CHANGED CODE
}
...

The above code is the following: https://github.com/spacetelescope/hstcal/blob/master/pkg/wfc3/calwf3/wf3cte/wf3cte.c#L1287-L1295

What is questionable here is what is doing the actual correction: the function sim_colreadout_l or the dampening code? What leads to this question is that for the last forward iteration, NITINV == cte->n_forward, the delta computed from sim_colreadout_l (that is, the correction itself) is not dampened but only subtracted from the image rather than being added to it. This therefore raises the question of whether sim_colreadout_l is correct, since it is the dampening code doing the subtraction and not sim_colreadout_l.

WFC3: Update photometry keyword descriptions

The WFC3 team would like to change the comments for some of the photometry keywords present in the headers of WFC3/UVIS images, to clarify the definitions for users. Note that the keywords themselves are not changing, just the comments on them. There is a 48 character limit. The changes to the keyword comments are listed below:

EXT   | Keyword  | Current comment                               | Proposed comment
0,1,4 | PHOTCORR | populate photometric header keywords          | NO CHANGE
0,1,4 | FLUXCORR | convert to absolute flux units                | Scale UVIS2 to match UVIS1 using PHTRATIO
0     | PHOTFLAM | inverse sensitivity                           | Inverse sensitivity, ergs/cm2/A/e-
1,4   | PHOTFLAM | Inverse sensitivity, ergs/cm2/A/e-            | NO CHANGE
0     | PHOTZPT  | zero point                                    | ST magnitude zero point
1,4   | PHOTZPT  | zero point                                    | ST magnitude zero point
0     | PHTFLAM1 | photometry scaling for chip 1                 | Chip1 Inv Sens, same as PHOTFLAM
1,4   | PHTFLAM1 | PHOTFLAM on chip1 when scales are not uniform | Chip1 Inv Sens, same as PHOTFLAM
0     | PHTFLAM2 | photometry scaling for chip 2                 | Chip2 Inv Sens, use when FLUXCORR=OMIT
1,4   | PHTFLAM2 | PHOTFLAM on chip2 when scales are not uniform | Chip2 Inv Sens, use when FLUXCORR=OMIT
0     | PHTRATIO | photometry scaling for chip2 to chip 1        | PHTFLAM2/PHTFLAM1 ratio
1,4   | PHTRATIO | Ratio of PHTFLAM1/PHTFLAM2                    | PHTFLAM2/PHTFLAM1 ratio
1,4   | PHOTMODE | observation con                               | Observing configuration
1,4   | PHOTFNU  | inverse sensitivity, Jy*Sec/electron          | Inverse sensitivity, Jy*sec/e-
1,4   | PHOTPLAM | pivot wavelength                              | NO CHANGE
1,4   | PHOTBW   | RMS bandwidth                                 | NO CHANGE

There are default description strings in calwf3 which are used when a keyword is added to a header where it doesn't already exist. This update makes the text consistent among all headers, including the global header keywords which calwf3 adds for user convenience and which are not populated in the raw headers on conversion.

See PR86371

CALACS PHOTCORR cannot open w3m17171j_imp.fits

The PHOTCORR step in ACS2D failed with an error like the one below:

PHOTCORR PERFORM
Found parameterized variable 1.
NUMPAR=1, N=1
Allocated 1 parnames
Adding parameter mjd#56663.9121 as parnames[0]
==> Value of PHOTFLAM = 3.1489898e-19
==> Value of PHOTPLAM = 4327.3009
==> Value of PHOTBW = 298.16921

==>ERROR: IMPHTTAB extension`jref$w3m17171j_imp.fits[PHTFLAM1]' not found.
*** Error in OpenPhotTab 114
ERROR:    Error return from GetPhotTab.
ERROR:    Error processing jc8o04gh2_blc_tmp.fits.
ERROR:             IRAF error 301:  c_tbtopn:  couldn't open file
ERROR:    Open failed.
INFO:acs_destripe_plus:Done.
FLT: jc8o04gh2_flt.fits
FLC: jc8o04gh2_flc.fits

Discussion is ongoing with REDCAT team to fix the reference file.

wf3cte - BIG BOSS LEVEL COMPLETE!

This is it, the bug in wf3cte.

...
  if (i==1 &&  RAZ_ROWS-1>=j  && j>0 ) {

        dval9 = obsloc[i][j-1]  - rszloc[i][j-1] +
            obsloc[i][j]    - rszloc[i][j]  +
            obsloc[i][j+1]  - rszloc[i][j+1] +
            obsloc[i-1][j-1]- rszloc[i-1][j-1] +
            obsloc[i-1][j]  - rszloc[i-1][j] +
            obsloc[i-1][j+1]- rszloc[i-1][j+1] +
            obsloc[i+1][j-1]- rszloc[i+1][j-1] +
            obsloc[i+1][j]  - rszloc[i+1][j] +
            obsloc[i+1][j+1]- rszloc[i+1][j+1];
    }
...

The conditional should be RAZ_ROWS-1>j NOT RAZ_ROWS-1>=j.

Clarification edit:
With RAZ_ROWS-1 >= j, j can equal RAZ_ROWS-1, in which case every j+1 index reaches RAZ_ROWS; this is C, with zero-based indices, so the RAZ_ROWSth element doesn't exist.

UPDATE:
Any garbage read by this access is luckily bounded to within +/- 0.33*read_noise by the following code

...
dval9 =dval9 / 9.;
dval9u = dval9;

if (dval9u > (rnsig*0.33))
    dval9u =  rnsig*0.33;
if (dval9u <  rnsig*-0.33)
    dval9u = rnsig*-0.33;
...

The compiler flag -fstack-protector-all was added to the main build script in order to 'somehow' fix the issue; the responsible commit is mirrored on github and svn, with a corresponding Trac issue. Here is the commit message:

  • moved the regular compile from optimization 1 to 2, added fstack-protect-all flag, still need to figure out where exactly the stack buffer overflow is happening, but its' consistently producing the correct results for now at least.

What makes no sense to me is that if the bug truly was a buffer overflow, then adding stack protection would only abort execution upon detecting an overflow (a write beyond the allowed stack space), i.e. if it was segfaulting before (from corrupted memory) it would now only abort - either way, it still 'crashes'. This compiler flag could NEVER actually 'fix' an overflow issue. As it turns out, the bug in this issue is actually a read and not a write, so it still wouldn't trigger the stack protector to abort, unless these are different bugs.

I identified the relation to stack protection when I experienced the bug and tried to debug with --debug, a build that uses -O0 and no stack protection. After determining that the cause wasn't directly the optimization difference but the use of stack protection in the release -O2 build, I came to the above conclusion.
