
Comments (7)

jonwright commented on September 24, 2024

It depends a bit on the microstructure and whether the grains are larger or smaller than the overlapping region. Usually we know how we moved the sample during the experiment, but you can end up fitting if the motors are not so great. If it is a general affine transform I would be a bit worried as you seem to mess around with the orientations too? Is it position matching or also orientation? Previous cases I am aware of:

  • With large grains and multiple slices per grain you can sometimes see orientation and strain changing as a function of height in the grain
  • With small grains and large overlaps you mostly see the same grain twice, once in each slice, and it should look the same each time (a few get clipped)
  • With "grain size" matching "overlap size" then most of them get are getting clipped

There was an idea to offset the image in "z" parallel to the translation, or offset the peaksearch output, as a way to deal with this.

You ideally need to look at the diffracted intensities and make the intensity weighted sum as a function of height. I think most people go for something like centre of gravity weighted by intensities. To simplify that I think we usually avoid overlapping the data collection (or do exactly 50% overlap).
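As a minimal sketch of that centre-of-gravity calculation for one grain (all numbers hypothetical):

```python
import numpy as np

# Hypothetical per-slice data for one grain: slice centre heights (um)
# and the summed diffracted intensity seen in each slice.
slice_z = np.array([0.0, 100.0, 200.0, 300.0])
intensity = np.array([0.0, 40.0, 60.0, 0.0])

# Intensity-weighted centre of gravity in z; any global intensity
# normalisation cancels in the ratio.
z_cog = np.sum(intensity * slice_z) / np.sum(intensity)   # 160.0
```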

Probably there is no script doing this properly in here, although there is a bunch of related work in the S3DXRD folder, which is scanning in y/omega to build sinograms and reconstruct them (overlaps are in y instead of z and somewhat entangled).

from imaged11.

jonwright commented on September 24, 2024

Feel free to drop it into the "sandbox" folder here if you like, unless you think it is documented and user friendly enough to put somewhere more easy to find :-)


jadball commented on September 24, 2024

In our case, we defined our beam shape to be full-width and 150 microns high.
Our sample was translated vertically in the beam (keeping the detector distance fixed) in 100 micron steps, so our overlap is 50 microns.
Thinking about it I'd give an average grain size of around 80 microns.

Here's the way the program thinks about it:

  • Take map 1, and map 2 which is below map 1 on the sample (so sample has translated up 100 um)
  • Apply an initial guess translation to the co-ordinates of the second map grains, to move them down by 100 um
  • Using the match.py script (considers position and orientation), find matching grain pairs in maps 1 and the translated map 2
  • Feed those matched grain pairs into the ICP algorithm which then calculates the affine transformation (translation and 3D rotation).
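For the last two steps, assuming you already have matched centroid pairs, the rigid part of the transform can be fitted in one least-squares pass with the standard Kabsch/SVD method (a generic sketch, not the actual ICP code the program uses):

```python
import numpy as np

def fit_rigid_transform(p, q):
    """Least-squares rotation R and translation t with p ~ q @ R.T + t,
    using the standard Kabsch/SVD method on matched centroids."""
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    h = (q - qc).T @ (p - pc)                  # 3x3 cross-covariance
    u, s, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))     # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = pc - r @ qc
    return r, t

# Hypothetical matched grain centroids: map 2 sits 100 um above map 1.
map1 = np.array([[0.0, 0.0, 0.0], [50.0, 10.0, 5.0],
                 [20.0, 40.0, 30.0], [80.0, 70.0, 60.0]])
map2 = map1 + np.array([0.0, 0.0, 100.0])
r, t = fit_rigid_transform(map1, map2)   # r is ~identity, t is ~[0, 0, -100]
```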

In my case, an initial guess of 100 um refines to around 91 um, which doesn't seem too bad. The resultant rotations are very small (each rotation matrix element is within 0.01 of 1 or 0).


jadball commented on September 24, 2024

Could you please go into further detail on the intensity-weighted sum?
We took each letterbox as a separate scan, so per loading step we have 10 scans of 150 microns each with a 1 mm total height scanned.


jadball commented on September 24, 2024

(or we can continue via email?)


jonwright commented on September 24, 2024

Email is fine but this might be useful for the next person to do the same job...?

When we do the experiments we normally set up the scan to make slices in z that either do not overlap or overlap by 50%. I think this explains the reason why; see the end for some potential work-arounds.

From makemap.py you get a line per grain which is "intensity_info". The tthrange argument to the script decides which peaks are used to get this from the data. You can also just look at the .flt.new file and see the peaks assigned to each grain (labels column). The intensity should be proportional to the illuminated grain volume and you can normalise with Lorentz factor and structure factors when they are known.
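A hedged sketch of summing peak intensities per grain from that labels column (the arrays are made up here, not read from a real .flt.new, and the intensity column name should be checked against your own file header):

```python
import numpy as np

# Hypothetical columns from a .flt.new peaks file: the grain each peak
# was assigned to ("labels", -1 = unassigned) and its integrated
# intensity (column name is an assumption; check your file header).
labels = np.array([0, 0, 1, 1, 1, -1, 2])
sum_intensity = np.array([10.0, 20.0, 5.0, 5.0, 10.0, 99.0, 7.0])

mask = labels >= 0                      # drop unassigned peaks
grain_ids = np.unique(labels[mask])
# Summed intensity per grain, proportional to illuminated grain volume
grain_intensity = np.array(
    [sum_intensity[labels == g].sum() for g in grain_ids])
```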

Assuming you somehow got intensities/sizes per grain, for the case where scans do not overlap you just take sum(intensity * height)/sum(intensity) over all the slices (intensity normalisation cancels out). If there is 50% overlap, use every other slice and run the code twice for odd versus even (results should match).
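For the 50%-overlap case, the odd/even splitting might look like this (hypothetical numbers, with a 150 um beam stepped by 75 um):

```python
import numpy as np

# Hypothetical 50%-overlap stack: a 150 um beam stepped by 75 um, so
# every other slice forms a non-overlapping set.
slice_z = np.arange(10) * 75.0                       # slice centres (um)
intensity = np.array([0, 0, 5, 30, 60, 60, 30, 5, 0, 0], dtype=float)

def cog(z, w):
    """Intensity-weighted centre of gravity."""
    return np.sum(w * z) / np.sum(w)

z_even = cog(slice_z[0::2], intensity[0::2])   # one non-overlapping set
z_odd = cog(slice_z[1::2], intensity[1::2])    # the other; should agree
```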

In your case there is a partial overlap. I guess there are 3 contributions:

  • part of grain in upper scan only (fu = fraction upper)
  • part of grain in both scans (fb = fraction both)
  • part of grain in lower scan only (fl = fraction lower)

If the intensity matches between the two scans then things look symmetric and you can have two distinct cases:

  • fu = fl = 0; all is contained in fb, heights should match.
  • (fu = fl) > 0; fb is partial, heights will be different in each scan.

Note that your grain can be long and thin (columnar, or 3D printed), so the overall size doesn't always help to distinguish. In either case, the centre of gravity for the pair total is just the average of the two heights. So for this case you have a final height and a height mismatch between the two scans which tells you something about the grain size in z. The more there is a mismatch in height, the larger the grain should be in z.

If the intensity is higher in one scan (e.g. fu > fl) compared to the other, there are still two cases:

  • fl = 0; grain is only in top region and overlapped zone
  • fl > 0; extends to both regions, but is not symmetric

Trying to work through this algebraically... The resulting height should be the weighted average coming from the fractions and the heights of the parts unique to each scan (l = lower, b = both, u = upper):

  • height from lower scan = (fb·zb + fl·zl) / (fb + fl)
  • height from upper scan = (fb·zb + fu·zu) / (fb + fu)
  • intensity from lower scan = fb + fl
  • intensity from upper scan = fb + fu

I am afraid you have a problem of 6 unknowns and only 4 observations?

If you set "fl=0" you can solve as zu is now irrelevant. It comes down to just picking the answer from the scan which has the higher intensity and saying the other scan was "clipped". If the grain actually extends into the lower scan then the answer comes out "wrong" because both scans are clipped. Usually no problem if the grain is smaller than the overlap region, but problematic when it is the same size.
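A sketch of that decision rule (the tolerance used to call the two intensities "equal" is my own assumption, not something from ImageD11):

```python
def resolve_height(z_lower, i_lower, z_upper, i_upper, tol=0.05):
    """If the two scans see (nearly) equal intensity, the grain is
    symmetric about the overlap and the pair average is the answer;
    otherwise trust the scan with the higher intensity and treat the
    other as clipped (the fl = 0 or fu = 0 case above)."""
    if abs(i_lower - i_upper) <= tol * max(i_lower, i_upper):
        return 0.5 * (z_lower + z_upper)       # symmetric: fu == fl
    return z_lower if i_lower > i_upper else z_upper
```

As noted above, this comes out "wrong" when the grain is about the same size as the overlap region and both scans are clipped.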

Some gotchas come from twinning and otherwise "challenging" microstructures. For example, you can have two small grains that happen to have the same orientation, but a significant amount of space between them (e.g., part was transformed or removed by a FIB, etc.). In that case you might have fb = 0 and they should not be paired. With twinning (more common), a bunch of the reflections are overlapped with another grain that has an orientation relationship, and you struggle to get sensible grain sizes due to systematic overlap.

Some work-arounds:

  • assume a grain shape (e.g. sphere) and compute the volumes after normalising for intensity (sum of all grains adds up to volume scanned)
  • try to fill in a Voronoi or Laguerre tessellation. Hamid's code will probably help a lot, see https://github.com/FABLE-3DXRD/3DXRD_processing
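The sphere-assumption work-around could be sketched as follows (hypothetical intensities; this assumes intensities have already been normalised as described above):

```python
import numpy as np

# Hypothetical normalised grain intensities and the total volume that
# was illuminated during the scan (um^3).
intensity = np.array([10.0, 40.0, 50.0])
v_scanned = 1.0e6

# Share the scanned volume in proportion to intensity, then convert to
# an equivalent sphere radius via V = (4/3) * pi * r**3.
volumes = v_scanned * intensity / intensity.sum()
radii = (3.0 * volumes / (4.0 * np.pi)) ** (1.0 / 3.0)
```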


jadball commented on September 24, 2024

Thanks so much for your reply!

I will look into this in more detail in the new year once I'm back at work.

If we assume we can trust our vertical displacement readback from our vertical translation stage (it's encoded), does that reduce one of the unknowns?

