
species's People

Contributors

mperrin, tomasstolker, trellixvulnteam, vandalt, wbalmer


species's Issues

FitModel including disk_teff and disk_radius: TypeCheckError

Following the great (=useful and clear) tutorial at https://species.readthedocs.io/en/latest/tutorials/fitting_model_spectra.html but modifying it a bit, I am doing:

fit = FitModel(object_name='beta Pic b', model='atmo-neq-weak',
  inc_phot=False, inc_spec=True, fit_corr=None, apply_weights=False,
  bounds={'teff': (1500., 1800.),
          'radius': (0.5, 2.),
          'GPI_Y': ((0.5, 1.5), None),
          'disk_teff': (100,1e4),
          'disk_radius': (0.1,1e3)} )

i.e., setting up an atmospheric model and adding a (very free!) blackbody. Then, I do (purposefully leaving the mass very free: 10±9 MJ, i.e., 1–19 MJ to 1 sigma):

fit.run_ultranest(tag='betapic', min_num_live_points=500,
    output='betapic/', prior={'mass': (10., 9.)})

or

fit.run_multinest(tag='betapic', n_live_points=500,
    output='betapic_2', prior={'mass': (10., 9.)})

but both lead to

TypeCheckError: the return value (float) is not an instance of numpy.float64

If using ultranest, the program stops at the error, whereas multinest (fortunately!) ignores it and prints it sixteen bazillion times (I counted; Ctrl+C does not interrupt the run either…) but otherwise works.

The problem does not occur (the fitting runs as advertised) if not including disk_teff and disk_radius.

Making unpacking/storing of atmospheric models more memory-efficient

  1. When running the following several times within a session, or across sessions in the same directory:
database.add_model(model='atmo-neq-weak', teff_range=(1500., 1800.))

whether or not I change the bounds of teff_range, I always see:

Unpacking ATMO NEQ weak model spectra (276 MB)... [DONE]

and similarly for the tar files of other atmospheric models. I am not sure the files really are being unpacked again, but it takes a long time (e.g., for the 16-GB file exo-rem-highres.tgz), whereas I would guess species only needs to check whether the required unpacked files are present. If some are missing, maybe only the needed ones could be extracted from the tar file (see the sketch after this list), or at worst everything unpacked again (the current behaviour), or the user could be offered to have the files interpolated (as is currently done for files that are indeed missing).

  2. Would it be sensible to have the possibility of storing the atmospheric models centrally? (Keep an option for local copies in case it makes sense for some uses.) I have run folders at different locations for different projects, but in every directory there are copies of the spectra (unchanged for years…), requiring a fresh download every time.

  3. Unless I am mistaken, currently all files from a tarball are extracted instead of only the ones required by a database.add_model() call. Would it not make sense to extract only the needed spectra? And especially for work with high-resolution spectra, it might even make sense to offer the option of partial spectral files (i.e., extract each needed *_spec.dat file but delete it right away, keeping only an appropriately-named partial-range file). This way, one could explore a bigger range (and/or a bigger number) of atmospheric parameters (Teff, logg, cloud parameters, etc.) over a small wavelength range, without memory explosions such as:

RuntimeError: Disable slist on flush dest failure failed (file write failed: time = Mon Jan 29 16:17:50 2024
, filename = 'species_database.hdf5', file descriptor = 22, errno = 5, error message = 'Input/output error', buf = 0x3c7a868, total write size = 664, bytes this sub-write = 664, bytes actually written = 18446744073709551615, offset = 0)

I had been warned:

UserWarning: Adding the full high-resolution grid of Exo-Rem to the HDF5 database may not be feasible since it requires a large amount of memory

It might often suffice to limit the Teff range (the main culprit) but maybe not always. Not everyone has access to unlimited memory, and smaller files mean faster reading in, etc.
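A minimal sketch of the selective extraction suggested in points 1 and 3, assuming the data folder simply mirrors the tarball layout (the function and its arguments are illustrative, not existing species code):

import os
import tarfile

def unpack_missing(tar_path, data_folder):
    # Extract only the members that are not yet present, instead of
    # re-unpacking the full archive on every add_model() call
    with tarfile.open(tar_path) as tar:
        missing = [member for member in tar.getmembers()
                   if not os.path.exists(os.path.join(data_folder, member.name))]
        if missing:
            tar.extractall(path=data_folder, members=missing)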

Sorry if these are naive beginner's questions showing that I do not understand the deeper structure of species :). I am hoping they will at least be useful to others, through your comments here and/or in the documentation (which is already very good)!

[bug] Flux-To-Magnitude conversion: error propagation

The conversion from flux to magnitude does not correctly calculate the uncertainty if the uncertainty is greater than the flux measurement: the magnitude uncertainty will be nan (it tries to take the log of a negative number).

Suggest adding a check for error > flux at line 474 of species.analysis.photometry and, if so, only returning error_app_lower (a sketch is below).
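A minimal sketch of the suggested guard, assuming the asymmetric errors come from evaluating the magnitude at flux ± error (the names are illustrative, not the actual species implementation):

import math

def flux_to_mag_with_error(flux, error, zp_flux):
    # apparent magnitude from the flux and the zero-point flux
    app_mag = -2.5 * math.log10(flux / zp_flux)

    # "lower" error from the brighter bound (flux + error); always defined
    error_app_lower = 2.5 * math.log10((flux + error) / flux)

    if error >= flux:
        # flux - error <= 0, so log10 would fail for the fainter bound:
        # return only error_app_lower, as suggested above
        return app_mag, (error_app_lower, error_app_lower)

    # "upper" error from the fainter bound (flux - error)
    error_app_upper = -2.5 * math.log10((flux - error) / flux)
    return app_mag, (error_app_lower, error_app_upper)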

[bug] Writing to hdf5 file fails with run_multinest and run_ultranest.

species == 0.5.5
h5py==3.8.0
python~=3.10.10 and similar

When running a grid fit as in the tutorial, the fit.run_ultranest and fit.run_multinest functions fail to write their output correctly to the HDF5 file, returning the error TypeError: Object dtype dtype('O') has no native HDF5 equivalent. This error persists across multiple model grids and companion choices.

The printed outputs are reasonable, providing median and maximum-likelihood parameters, and some samples are clearly being written, since database.get_mcmc_spectra and species.plot_posterior work fine. However, I think this initial output issue might cause problems for the species.plot_spectrum function, which fails to fully complete the plot and instead gives the error ValueError: zero-size array to reduction operation minimum which has no identity.

Higher-order interpolation?

I wanted to try out higher-order interpolation by changing interp_method in species_config.ini, but from SpeciesInit() I get ValueError: Method 'pchip' is not defined, or ValueError: There are 3 points in dimension 2, but method cubic requires at least 4 points per dimension., or ValueError: There are 3 points in dimension 2, but method quintic requires at least 6 points per dimension.

The first question is whether (thinking here especially of FitModel()) it can really be useful to use something other than linear. If yes, would it make sense to let the interpolation method be an argument (overriding the default) of the functions? Or does the user need to re-initialise the whole species package when trying out different orders? Comparing the importance of the interpolation method on one plot (as a check) would require saving the data, changing the order in the config file, re-initialising, and reading the data back in. But hopefully linear interpolation is always ok anyway, so that this test does not need to be done often 😉.
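For reference, a small standalone SciPy example (with made-up grid values, not species code) that reproduces why cubic is rejected here: RegularGridInterpolator needs at least 4 grid points per dimension for cubic and 6 for quintic:

import numpy as np
from scipy.interpolate import RegularGridInterpolator

teff = np.array([1500., 1600., 1700., 1800.])  # 4 points: cubic would work
logg = np.array([3.5, 4.0, 4.5])               # 3 points: only linear works
flux_grid = np.random.rand(teff.size, logg.size)

# method="cubic" would raise ValueError here because logg has only 3 points
interp = RegularGridInterpolator((teff, logg), flux_grid, method="linear")
print(interp([[1650., 4.2]]))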

Can not find the atmosphere model associated with the 'linder2019-petitCODE-metal_0.0' when running get_photometry

When I try to use the get_photometry function with the linder2019 grid, I get the following error: ValueError: Can not find the atmosphere model associated with the 'linder2019-petitCODE-metal_0.0' evolutionary model.

Getting the isochrones with get_isochrone seems to work though, odd!

Here's some code to reproduce the error.

import species
species.SpeciesInit()
database = species.Database()

## This works
# database.add_isochrones(model='atmo')
# tag = 'atmo-ceq'

## This doesn't
database.add_isochrones(model='linder2019') 
tag = "linder2019-petitCODE-metal_0.0"

read_iso_bex = species.ReadIsochrone(tag=tag)
phot_box = read_iso_bex.get_photometry(age=24., mass=1., distance=20., filter_name='JWST/NIRCam.F360M')

I make sure to pull the most recent commit before testing.

Adding an arbitrary offset and scaling to atmospheric models

When fitting photometry or spectra (as in https://species.readthedocs.io/en/latest/tutorials/fitting_model_spectra.html), would it be possible, for exploratory purposes, to fit an arbitrary linear flux term and a scaling of the theoretical spectrum (e.g. PHOENIX, BT-Settl, ATMO, Exo-REM, etc.)? This can currently be partly done by adding a blackbody component when fitting (using disk_teff and disk_radius in bounds= for FitModel()), but that component cannot yield a negative flux; if the model has the same spectral features (equally strong peak-to-peak) as the data but is too bright, adding a blackbody will not work (see the sketch below).
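A minimal sketch of the proposed parametrization (a suggestion only, not an existing species feature; all names are illustrative):

def adjusted_model(flux_model, scaling, flux_offset):
    # multiplicative scaling plus an additive flux term; unlike a
    # blackbody component, both may be negative, so a model that is
    # too bright can still be matched to the data
    return scaling * flux_model + flux_offset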

Thanks, and I hope this makes sense :)!

Installation error: No matching distribution found for matplotlib~=3.8.0

Hello! I am trying to install species on another machine (where I do not have sudo rights) and get:

[…]/species$ pip install -e .
Defaulting to user installation because normal site-packages is not writeable
…  # many things that get installed without problem
INFO: pip is looking at multiple versions of species to determine which version is compatible with other requirements. This could take a while.
ERROR: Ignored the following versions that require a different python version: 2.2.2 Requires-Python >=3.9; 2.2.2rc3 Requires-Python >=3.9; 3.8.0 Requires-Python >=3.9; 3.8.0rc1 Requires-Python >=3.9; 3.8.1 Requires-Python >=3.9; 3.8.2 Requires-Python >=3.9; 3.8.3 Requires-Python >=3.9; 5.3 Requires-Python >=3.9; 5.3.1 Requires-Python >=3.9; 5.3.2 Requires-Python >=3.9; 5.3.3 Requires-Python >=3.9; 5.3.4 Requires-Python >=3.9; 5.3rc1 Requires-Python >=3.9; 6.0.0 Requires-Python >=3.9; 6.0.0rc1 Requires-Python >=3.9; 6.0.0rc2 Requires-Python >=3.9
ERROR: Could not find a version that satisfies the requirement matplotlib~=3.8.0 (from species) (from versions: 0.86, 0.86.1, 0.86.2, 0.91.0, 0.91.1, 1.0.1, 1.1.0, 1.1.1, 1.2.0, 1.2.1, 1.3.0, 1.3.1, 1.4.0, 1.4.1rc1, 1.4.1, 1.4.2, 1.4.3, 1.5.0, 1.5.1, 1.5.2, 1.5.3, 2.0.0b1, 2.0.0b2, 2.0.0b3, 2.0.0b4, 2.0.0rc1, 2.0.0rc2, 2.0.0, 2.0.1, 2.0.2, 2.1.0rc1, 2.1.0, 2.1.1, 2.1.2, 2.2.0rc1, 2.2.0, 2.2.2, 2.2.3, 2.2.4, 2.2.5, 3.0.0rc2, 3.0.0, 3.0.1, 3.0.2, 3.0.3, 3.1.0rc1, 3.1.0rc2, 3.1.0, 3.1.1, 3.1.2, 3.1.3, 3.2.0rc1, 3.2.0rc3, 3.2.0, 3.2.1, 3.2.2, 3.3.0rc1, 3.3.0, 3.3.1, 3.3.2, 3.3.3, 3.3.4, 3.4.0rc1, 3.4.0rc2, 3.4.0rc3, 3.4.0, 3.4.1, 3.4.2, 3.4.3, 3.5.0b1, 3.5.0rc1, 3.5.0, 3.5.1, 3.5.2, 3.5.3, 3.6.0rc1, 3.6.0rc2, 3.6.0, 3.6.1, 3.6.2, 3.6.3, 3.7.0rc1, 3.7.0, 3.7.1, 3.7.2, 3.7.3, 3.7.4, 3.7.5)
ERROR: No matching distribution found for matplotlib~=3.8.0

I have Python 3.8.10. [I get the same error on my computer, actually, but I had already installed species before the requirements were increased (my computer got grandfathered in 😄).] What can I do?

Cannot get model for bt-settl

Hello Tomas

I do not know if it is a bug or if I made a mistake, but I cannot load the bt-settl models (I did download them using database.add_model("bt-settl")).

Here is what I typed:

species.SpeciesInit()
model="bt-nextgen"
teff=7000
read_model = species.ReadModel(model=model,wavel_range=(1., 5.))
model_param = {'teff':teff, 'logg':4.0, 'feh':0.0}
modelbox = read_model.get_model(model_param=model_param,spec_res=200.)

And here is what I got in return:

Initiating species v0.1.1... [DONE]
Database: /Users/slacour/PycharmProjects/ExoGravity_code/species_database.hdf5
Data folder: /Users/slacour/DATA/SPECIES
Working folder: /Users/slacour/PycharmProjects/ExoGravity_code
Traceback (most recent call last):
File "/Users/slacour/.conda/envs/ExoGravity_code/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3319, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 6, in
modelbox = read_model.get_model(model_param=model_param,spec_res=200.)
File "/Users/slacour/.conda/envs/ExoGravity_code/lib/python3.7/site-packages/species/read/read_model.py", line 258, in get_model
flux = self.spectrum_interp(parameters)[0]
File "/Users/slacour/.conda/envs/ExoGravity_code/lib/python3.7/site-packages/scipy/interpolate/interpolate.py", line 2489, in __call__
indices, norm_distances, out_of_bounds = self._find_indices(xi.T)
File "/Users/slacour/.conda/envs/ExoGravity_code/lib/python3.7/site-packages/scipy/interpolate/interpolate.py", line 2536, in _find_indices
norm_distances.append((x - grid[i]) /
IndexError: index -2 is out of bounds for axis 0 with size 0

Running multinest or ultranest in parallel?

After getting multinest set up in parallel, I am now actually trying to run it, doing my best to follow the recommendation from the "Fitting data with a grid of model spectra" tutorial: "It is therefore recommended to first add all the required data to the database and then only run SpeciesInit, FitModel, and the sampler (run_multinest or run_ultranest) in parallel with MPI". However, running in parallel with N processors, I get N-1 times:

Traceback (most recent call last):
  File "[…]/Skript.py", line 297, in <module>
    fit = FitModel(object_name='meinPlanet', model='ames-cond', inc_spec=True)
  File "[…]/species/fit/fit_model.py", line 445, in __init__
    self.object = ReadObject(object_name)
  File "[…]/species/read/read_object.py", line 47, in __init__
    with h5py.File(self.database, "r") as h5_file:
  File "[…]/.local/lib/python3.9/site-packages/h5py/_hl/files.py", line 567, in __init__
    fid = make_fid(name, mode, userblock_size, fapl, fcpl, swmr=swmr)
  File "[…]/.local/lib/python3.9/site-packages/h5py/_hl/files.py", line 231, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5f.pyx", line 106, in h5py.h5f.open
BlockingIOError: [Errno 11] Unable to open file (unable to lock file, errno = 11,
  error message = 'Resource temporarily unavailable')

Thus the line FitModel(object_name='meinPlanet', model='ames-cond', inc_spec=True) seems to be the problem. One processor out of the N writes Interpolating Spektrum... [DONE]. Then, the program stalls (unsurprisingly). Am I missing something simple? As far as I can tell, really only SpeciesInit() and FitModel() are called before the sampler. Thanks for any help!
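One possible (untested) workaround, assuming the N-1 failing processes only need read access while FitModel opens the database: disable HDF5 file locking before h5py is first imported, so that multiple MPI processes can open the file simultaneously:

import os

# must be set before h5py (and therefore species) is imported
os.environ["HDF5_USE_FILE_LOCKING"] = "FALSE"

import species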

sonora elf-owl as successor to bobcat and cholla

The sonora team recently released their elf-owl model grid on zenodo (T-dwarfs here, but there are also separate L-dwarf and Y-dwarf entries, maybe because of file size?). These are supposedly the successors to the bobcat and cholla grids, and they vary all the interesting parameters (abundances and Kzz). It would be very useful to have them in species!

BEX Exoplanet Grids

Hi Tomas,

I love the package! Just like William, I would love to see another model grid added; in my case, the BEX models from Linder et al. 2019. I would try to hack them in myself, but sadly I don't think I'll have time, so I thought I'd drop it here to see if anyone else might!

Cheers,
Max

Retrieval with radial velocity / rotational broadening vsini

While trying to run a retrieval on an object, I saw that the vsini and rad_vel parameters (which I included in the bounds dictionary of my multinest retrieval) are not passed to the algorithm.
Looking at the source code, it seems that these parameters are not passed to calc_spectrum_clouds.
Would it be possible to add them?
Thanks !

wavelength / spectral spacing for exo-rem-highres grid

Hi Tomas,

I'm really interested in using the exo-rem-highres grid for our AF Lep b analysis, but FitModel stalls when interpolating the grid onto our SPHERE IFS spectrum, because the wavelength sampling of that spectrum is not uniform.

I get this warning, but have to kill the process manually:

Interpolating SPHERE...

/Users/wbalmer/species/species/util/spec_util.py:122: UserWarning: The wavelength spacing is not uniform (3.152470770057788e-05 +/- 6.685785633535913e-06). The smoothing with the Gaussian kernel requires either the spectral resolution or the wavelength spacing to be uniformly sampled.
warnings.warn(

How would you recommend I proceed? Is there a way to resample the exo-rem-highres to a uniform spectral resolution? I tried

database.add_model(model=model_choice, teff_range=(700., 1300.), wavel_sampling=2000.)

but to no avail.
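A sketch of one possible workaround (wavel, flux, and error below are random stand-ins for the columns of the SPHERE IFS spectrum): resample the spectrum onto a uniformly spaced wavelength grid with spectres before adding it to the database:

import numpy as np
import spectres

# stand-ins for the non-uniformly sampled SPHERE IFS spectrum
wavel = np.sort(np.random.uniform(0.95, 1.65, 39))
flux = np.random.rand(39)
error = 0.1 * flux

# uniform grid over the same range, with the same number of points
wavel_uniform = np.linspace(wavel[0], wavel[-1], wavel.size)
flux_uniform, error_uniform = spectres.spectres(
    wavel_uniform, wavel, flux, spec_errs=error, fill=0.0)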

Using wavel_range with database.add_model()

Is there a change in the use of wavel_range when adding spectra to the database? I checked out the latest version of species and get:

In [6]: database.add_model(model='exo-rem-highres', teff_range=(1000, 1900), wavel_range=(1,5), unpack_tar=False)
…
Wavelength range (um) = 0.67 - 250.0
Sampling (lambda/d_lambda) = 20000
Teff range (K) = 1000 - 1900

Adding Exo-REM model spectra... exo-rem-highres_teff_1000_logg_3.0_feh_-0.5_co_0.10_spec.dat
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-6-ceb61b05de29> in <module>
----> 1 database.add_model(model='exo-rem-highres', teff_range=(1000, 1900), wavel_range=(1,5), unpack_tar=False)

[…]/species/data/database.py in add_model(self, model, wavel_range, wavel_sampling, teff_range, unpack_tar)
    716 
    717         with h5py.File(self.database, "a") as hdf5_file:
--> 718             add_model_grid(
    719                 model_tag=model,
    720                 input_path=self.data_folder,

[…]/species/data/model_data/model_spectra.py in add_model_grid(model_tag, input_path, database, wavel_range, teff_range, wavel_sampling, unpack_tar)
    336 
    337                 else:
--> 338                     flux_resample = spectres.spectres(
    339                         wavelength,
    340                         data_wavel,

~/.local/lib/python3.10/site-packages/spectres/spectral_resampling.py in spectres(new_wavs, spec_wavs, spec_fluxes, spec_errs, fill, verbose)
     75 
     76     old_edges, old_widths = make_bins(old_wavs)
---> 77     new_edges, new_widths = make_bins(new_wavs)
     78 
     79     # Generate output arrays to be populated

~/.local/lib/python3.10/site-packages/spectres/spectral_resampling.py in make_bins(wavs)
      7     """ Given a series of wavelength points, find the edges and widths
      8     of corresponding wavelength bins. """
----> 9     edges = np.zeros(wavs.shape[0]+1)
     10     widths = np.zeros(wavs.shape[0])
     11     edges[0] = wavs[0] - (wavs[1] - wavs[0])/2

AttributeError: 'NoneType' object has no attribute 'shape'

Removing wavel_range=(1,5) "works", but then the computer nearly froze while it was doing Adding Exo-REM model spectra... exo-rem-highres_teff_…, and I had to suspend ipython3 and do kill %%.

Thank you! Sorry if I am missing something simple.

Warning: 'PyMultiNest could not be imported.'

I am getting this warning whenever I import species. The details of the warnings it shows are below:

ERROR:   Could not load MultiNest library "libmultinest.so"
ERROR:   You have to build it first,
ERROR:   and point the LD_LIBRARY_PATH environment variable to it!
ERROR:   manual: http://johannesbuchner.github.com/PyMultiNest/install.html


ERROR:   Could not load MultiNest library: libmultinest.so
ERROR:   You have to build MultiNest,
ERROR:   and point the LD_LIBRARY_PATH environment variable to it!
ERROR:   manual: http://johannesbuchner.github.com/PyMultiNest/install.html

problem: libmultinest.so: cannot open shared object file: No such file or directory
/work/LAS/kerton-lab/suvadip/py3/lib/python3.8/site-packages/species/analysis/fit_model.py:26: UserWarning: PyMultiNest could not be imported.
  warnings.warn('PyMultiNest could not be imported.')

I checked and found that pymultinest is already installed. Any help would be great! Thanks!

Tutorial "Fitting data with a grid of model spectra" (and a few other pages): small things

Following your request, I am submitting an "Issue" for several small things.

Not-major, not-minor things

  1. Unless I am mistaken, in https://species.readthedocs.io/en/latest/tutorials/fitting_model_spectra.html, the following line is missing at the top:
from species.plot.plot_spectrum import plot_spectrum

As is, ipython was not finding plot_spectrum() under "Plotting the spectral energy distribution".

  2. How does plot_spectrum know to evaluate the model at each sample in samples? modelbox is set only as = read_model.get_model(model_param=best, …), i.e. only with best. And how is a spec_res picked for the samples? This could be clarified.

Minor things

  1. At the top, about LD_LIBRARY_PATH: it might be cleaner (and/or less clean but easier for casual users) to set it in .bashrc or its Mac equivalent, and you might want to note that the variable is DYLD_LIBRARY_PATH on Macs but LD_LIBRARY_PATH under Linux.

  2. "… be found in the API documentation of FitModel."
    The link gives Error 404.

  3. "run_multinest) and run_ultranest)"
    There are extra parentheses.

  4. The output of code block [8] has "Object: Unknown", which looks like a contradiction:

Adding object: beta Pic b
    - GRAVITY spectrum:
       - Object: Unknown
…
  5. "we will use the run_multinest method since UltraNest is somewhat more straightforward to install than MultiNest"
    It sounds like you meant "even though MultiNest is less straightforward to install"…

  6. Under "Plotting-the-posterior-samples":
    "Here we specify the database tag with the results from run_ultranest"
You must mean multinest, and you could note that the tag can be a different name for each run (varying parameters, or ultranest vs. multinest).

  7. Is there an easy way (without digging into Matplotlib…) of not plotting the parallax row and column in the corner plot? It can be good as a check but, since it is an input, one probably often wants to skip it.

  8. In "Next, we calculate the residuals of the best-fit model with get_residuals", the link "get_residuals" has the wrong anchor as target:

#species.util.phot_util.get_residuals
→
#species.util.fit_util.get_residuals
  9. On that page (https://species.readthedocs.io/en/latest/species.util.html#species.util.phot_util.get_residuals), by the way, there is an extra `:
"Only required with spectrum='petitradtrans'`. Make sure that"
→
Only required with spectrum='petitradtrans'. Make sure that
  10. Back to the tutorial:

Now that we have gather all
→
Now that we have gathered all

  11. Suggestion: add by default minor ticks and/or 1-sigma and/or 3-sigma lines in the residuals panel of the plotted spectrum. This is especially useful when |ylim_res| is large (>3 sigma): whether points somewhere in the middle of the x range have residuals of ~1 or ~5 sigma is somewhat important… I have seen a few papers now where this is tough to see.

  12. At https://species.readthedocs.io/en/latest/tutorials/emission_line.html:
    "subtract_continuum method of EmissionLine" contains two links that both give an Error 404. Maybe there is a tool to detect links that are no longer valid and/or to update links in the documentation and tutorials when the source code changes…

  13. At https://species.readthedocs.io/en/latest/overview.html,
    "Spectra of directly imaged planets and brown dwarfs" points to https://github.com/tomasstolker/species/blob/main/species/data/companion_data.json, which also gives an Error 404.

  14. Same on another page (sorry, I do not know which one), with the text (I copied it but not the URL):
    "A library of magnitudes and parallaxes of directly imaged companions are available in the file."
    This contains a link giving an Error 404.

  15. From species/data/filter_data/filter_data.py:227:

UserWarning: The minimum transmission value of Subaru/CIAO.z is smaller than zero (-1.80e-03).

This is from the SVO, but maybe you could fix it and/or ask them to?

  16. By the way: #70 is closed, but it is not marked as such.

Thanks again for really a great tutorial! These are just small notes I made on the way.

Enhancing plot_spectrum a bit

A few suggested improvements for plot_spectrum:

  1. The disc parameters are not included in the legend (when no label is specified), which is "dangerous" because there is no reminder on the plot that such a component was included. I was going to suggest extending leg_param to include all parameters in the ModelBox, but that gives UserWarning: The 'disk_teff' parameter in 'leg_param' is not a parameter of 'atmo-neq-weak' so it will not be included with the legend., and the same for disk_radius, even though ModelBox.open_box() listed them.

  2. Can one have the data points underneath (underplot) the model points? Changing the order of the boxes passed to plot_spectrum() surprisingly does not have this effect.

  3. Would it be easy to have a "staircase" (with steps centered at the x values) for the theoretical spectrum? This would be good when looking at only small spectral regions, as a clear reminder of the finite resolution.

  4. It would be nice to be able to underplot only the blackbody (or, in general, "the other model" when using two components), to see its contribution more clearly, and perhaps also the pure-atmospheric model. If it is not much more work to program: have for each component the option of underplotting it?

In any case, the routine is already very nice!

BTSettl-CIFIST models error with read_model.py and scipy interpnd

I am trying to use the BTSettl-CIFIST models because they go up to the higher temperatures I need for stars, and I am encountering an error that seems to be deep within the API and that I am not able to solve. It seems that when read_model.py interfaces with scipy interpolate.interpnd, there is an error when reshaping an array. This has not been a problem with any other models I have tried.
I installed species yesterday, version 0.5.3.

I am following the "synthetic photometry" tutorial:

import species
species.SpeciesInit()
database = species.Database()
database.add_model(model='bt-settl-cifist', teff_range=(2000., 7000.))
database.add_filter('SLOAN/SDSS.g')
database.add_filter('SLOAN/SDSS.r')
database.add_filter('SLOAN/SDSS.i')
database.add_filter('SLOAN/SDSS.z')
read_model = species.ReadModel(model='bt-settl-cifist')  # defined earlier in my session
model_param = {'teff':5400, 'logg':4.0, 'feh':0.5, 'distance': Adistance[0], 'radius':1000}
model_box = read_model.get_model(model_param=model_param, spec_res=200., smooth=True)

Produces the error:
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [10], in <cell line: 1>()
----> 1 model_box = read_model.get_model(model_param=model_param, spec_res=200., smooth=True)

File ~/anaconda3/envs/py39/lib/python3.9/site-packages/typeguard/__init__.py:1033, in typechecked.<locals>.wrapper(*args, **kwargs)
1031 memo = _CallMemo(python_func, _localns, args=args, kwargs=kwargs)
1032 check_argument_types(memo)
-> 1033 retval = func(*args, **kwargs)
1034 try:
1035 check_return_type(retval, memo)

File ~/anaconda3/envs/py39/lib/python3.9/site-packages/species/read/read_model.py:747, in ReadModel.get_model(self, model_param, spec_res, wavel_resample, magnitude, smooth)
743 parameters.append(model_param["log_kzz"])
745 # Interpolate the spectrum from the grid
--> 747 flux = self.spectrum_interp(parameters)[0]
749 # Add the radius to the parameter dictionary if the mass if given
751 if "mass" in model_param and "radius" not in model_param:

File ~/anaconda3/envs/py39/lib/python3.9/site-packages/scipy/interpolate/_interpolate.py:2515, in RegularGridInterpolator.__call__(self, xi, method)
2512 raise ValueError("Method '%s' is not defined" % method)
2514 ndim = len(self.grid)
-> 2515 xi = _ndim_coords_from_arrays(xi, ndim=ndim)
2516 if xi.shape[-1] != len(self.grid):
2517 raise ValueError("The requested sample points xi have dimension "
2518 "%d, but this RegularGridInterpolator has "
2519 "dimension %d" % (xi.shape[1], ndim))

File interpnd.pyx:157, in scipy.interpolate.interpnd._ndim_coords_from_arrays()

File interpnd.pyx:182, in scipy.interpolate.interpnd._ndim_coords_from_arrays()

ValueError: cannot reshape array of size 3 into shape (2)

Various small things

  1. At start-up, doing SpeciesInit() leads to:
UserWarning: no type annotations present -- not typechecking species.util.dust_util.check_dust_database
  2. Would it be good to silently convert resolution in database.add_object() to a float? Currently:
type of argument "spec_res" must be one of (float, NoneType); got numpy.int64 instead

Keeping the resolution an integer makes it easy to combine with filenames, for instance, and one often thinks of the resolution as an integer.

  3. The documentation says about add_object(): Function for adding the photometry and/or spectra of an object to the database, and the tutorial writes: "The use of add_object is incremental so when rerunning the method it will add new data in case a filter or spectrum name hadn’t been used before, or it will overwrite existing data in case a filter or spectrum name does already exist", but actually it replaces the data when adding spectral data in two different calls:
database.add_object(object_name = 'myPMO', parallax=(100,0.1), spectrum = {'onepart': ('file1.dat', None, 1000)})
database.add_object(object_name = 'myPMO', parallax=(100,0.1), spectrum = {'anotherpart': ('file2.dat', None, 1000)})

After the second call, database.get_object('myPMO').open_box() will only list the 'anotherpart' spectral data. If the parallax is not set in the second call, it is kept from the first (as it should be).

Honestly, so far I have found it practical to have the data overwritten, so that in FitModel I do not have to worry about including only the datasets I want (because I forgot to…), but at other times it can lead to unpleasant surprises. Maybe provide an overwritedata= keyword (False by default)?

  4. Two tiny things in species/util/box_util.py:

    " but the argument of 'model' is set to "
    →
    "but the argument of 'model' is set to "
    🤓

    "is differently implemented with"
    →
    "is implemented differently with"

  5. When calling database.list_content(), it would be nice to print the stored resolution of the spectral data.

  6. Similarly, it would be useful to output the (lowest, average, highest) resolution of the model when doing modelbox_full.open_box().

  7. In ./species/plot/plot_mcmc.py:

print(f"Plotting the posterior: {output}...", end="", flush=True)
→
print(f"Plotting the posterior: {output} ...", end="", flush=True)

This would make copying the filename from the terminal easier: without a space, the three dots get selected too when double-clicking on the name :D.

  8. In the corner plot, for the column title, at least for ATMO:
    ad_index = {}
    →
    $\gamma = {}$

  9. Put the (e.g.) "1e-16" as "10^{-16}" into the parentheses with the units of flux density (the classical way) instead of on top of the plot (defensible, but less elegant, and it takes up a whole line for a number that is not of detailed interest).

  10. When units get implemented in plot_spectrum, or until then as an option, have erg/cm²/s/Å units for the flux density (could be called "cgs-like" or "Swedish-style" (svensk=True?) because of the Å 😉).

Thanks!

spectral_type

I don't get any meaningful output from the function CompareSpectra.spectral_type. The issue is apparently solved by transposing the 'spectrum' array at line 146 in empirical.py.

Fitting with nested sampling (UltraNest or MultiNest)

  1. In the documentation of fit.run_ultranest, about prior:
The parameter is not used if set to ``None``.
→
A flat prior between XX and YY (log? lin?) is used

or something like this to make it clearer.

  2. Is it possible to terminate a run early without losing all the data? It seems that Ctrl+C terminates the run but also corrupts the database, because it remains open and the user cannot close it by hand. For MultiNest, Ctrl+C (most likely?) does not stop the run (as mentioned in #76). It would be nice to have, for both UltraNest and MultiNest, the possibility of gracefully stopping without corrupting the database (and, as a bonus, maybe even saving the points obtained so far, but this is really extra). I made a mistake starting a run with too many live points and could not correct this, only wait for it to finish…

  3. If 2. is possible: while the run is going, is there an easy way of plotting (in physical-parameter space) the distribution of points found so far, to estimate whether the run is actually finished or still needs a long time? Especially when testing, this would be practical. (As you probably know, "UltraNest has a visualisation to observe the current live points" but I am guessing this is deep in the package. For MultiNest, I cannot make sense of how close it is to being finished, and just looking by hand at the distribution of points while the simulation is running should give a good idea.)

  4. I got the warning

UserWarning: Sampling from region seems inefficient (0/40 accepted in iteration 2500). To improve efficiency, modify the transformation so that the current live points (stored for you in […]/run1/extra/sampling-stuck-it%d.csv) are ellipsoidal, or use a stepsampler, or set frac_remain to a lower number (e.g., 0.5) to terminate earlier.

How can I "use a stepsampler" or "set frac_remain" (see the sketch after this list)? Or are these just symptoms, and for typical fitting situations this means I am doing something wrong at a deeper level?

  5. Hoping for a quick, rough run, I tried min_num_live_points=100 in fit.run_ultranest(), but after "the main part" came the message:
[ultranest] Evidency uncertainty strategy wants 199 minimum live points (dlogz from 0.50 to 1.18, need <0.5)

and it wrote something about increasing the number of live points and started running again… So in the end it took what felt like a similar amount of time as with 500 live points. Is there then (typically/often) no advantage to reducing the number of live points? And more generally, is it possible to somehow do a "quick run" yielding only an approximate best fit and error bars, to see whether the model can match at all?

  6. I get the following warnings (pointing them out here in case they are easy to fix):
[…]/species/fit/fit_model.py:1848: InstrumentationWarning: instrumentor did not find the target function -- not typechecking species.fit.fit_model.FitModel.run_multinest.<locals>.lnprior_multinest

[…]/species/fit/fit_model.py:1887: InstrumentationWarning: instrumentor did not find the target function -- not typechecking species.fit.fit_model.FitModel.run_multinest.<locals>.lnlike_multinest
  7. Currently, only part of the priors, namely the ones set as hard boundaries (through FitModel/bounds=), are printed to screen ("Prior boundaries"), but neither those nor the ones set through fit.run_*nest/prior={…} [I see now that it is deprecated, but still possible] are stored in the run*/ folder or in the database. The parallax, set through database.add_object(), is also a prior, but it is neither indicated as such nor stored. For completeness, I guess all of these should be listed.

  8. In FitModel, I accidentally set a key in bounds that is not one of the axes of the atmospheric model, which led to way too many of:

KeyError: 'feh'
Exception ignored on calling ctypes callback function: <function run.<locals>.loglike at 0x7fa2440c0f70>
Traceback (most recent call last):
  File "[…]/.local/lib/python3.9/site-packages/pymultinest/run.py", line 228, in loglike
    return LogLikelihood(cube, ndim, nparams)
  File "[…]/species/fit/fit_model.py", line 1906, in lnlike_multinest
    mpi_rank = MPI.COMM_WORLD.Get_rank()
  File "[…]/species/fit/fit_model.py", line 1161, in lnlike_func

One warning would have been enough :). Or even better: silently ignore the extraneous parameter (or print just one warning)? This way, when trying different atmospheric models, one could always pass all priors as one variable and thus avoid acrobatics to set only a subset of them. It also led to (or maybe it was a coincidence):

~$ head multinest/.txt 
0.200000000000000048E-02   -0.139057853353940903-308    0.101724320650100708E+04    0.413902401924133301E+01    0.246439111232757568E+01    0.199086349010467556E+01    0.156033434761241327E+02

Note the missing E in -0.139057853353940903-308.
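Regarding point 4, here is a sketch of the two options the UltraNest warning mentions, using UltraNest directly with a dummy two-parameter problem (species' run_ultranest does not obviously expose these; all names below are illustrative, and the stepsampler API assumes a recent UltraNest version):

import numpy as np
import ultranest
import ultranest.stepsampler

param_names = ["teff", "radius"]  # dummy two-parameter problem

def loglike(params):
    # dummy Gaussian log-likelihood for illustration
    return -0.5 * float(np.sum((np.asarray(params) - 0.5) ** 2))

def transform(cube):
    return cube  # identity mapping of the unit cube

sampler = ultranest.ReactiveNestedSampler(param_names, loglike, transform=transform)

# "use a stepsampler": replace region sampling with slice sampling
sampler.stepsampler = ultranest.stepsampler.SliceSampler(
    nsteps=2 * len(param_names),
    generate_direction=ultranest.stepsampler.generate_mixture_random_direction,
)

# "set frac_remain to a lower number": terminate earlier
result = sampler.run(frac_remain=0.5)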

Thanks again!

Database problem: OSError: Unable to open file (file is already open for read-only)

I added an object

from species.data.database import Database
database = Database()
database.add_object('Objekt1', parallax = …  # set properties
    )

but then made a typo when looking it up later: database.get_object('Abject1'), leading to

KeyError: "Unable to open object (object 'Abject1' doesn't exist)"

Afterwards, I tried to delete 'Objekt1' through database.delete_data('objects/Objekt1') but got an HDF5 error:

h5py/_objects.pyx in h5py._objects.with_phil.wrapper()

h5py/_objects.pyx in h5py._objects.with_phil.wrapper()

h5py/h5f.pyx in h5py.h5f.open()

OSError: Unable to open file (file is already open for read-only)

How can I recover from here without quitting the session and restarting? And would it be possible to catch these errors and close the file in the background?
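A hacky in-session recovery sketch (assuming the dangling handle is a plain h5py.File object): search the garbage collector for open HDF5 file handles and close them, after which the database can be reopened:

import gc
import h5py

for obj in gc.get_objects():
    if isinstance(obj, h5py.File):
        try:
            obj.close()  # release the read-only handle left open after the KeyError
        except Exception:
            pass  # the handle may already be closed or invalid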

Number of steps cut with fixed parameters

In plot_posterior, when checking for fixed parameters, only axis 0 has :, so axes 1 and 2 both get filtered by the index_sel mask. This results in each fitted parameter having only 4 steps per walker in the corner plot (the shape before the index selection is (nwalkers, nsteps-burnin, nparams) and it becomes (nwalkers, nparams, nparams) after). I fixed this in a fork of the project and can create a PR (the only modification is shown in the code block below). I checked with a few numpy versions, and it does not seem to be related to the numpy version.

    # Check if parameter values were fixed

    index_sel = []
    index_del = []

    # Use only last axis for parameter dimensions
    for i in range(ndim):
        if np.amin(samples[:, :, i]) == np.amax(samples[:, :, i]):
            index_del.append(i)
        else:
            index_sel.append(i)

    samples = samples[:, :, index_sel]

Adding dynesty support (especially to retrievals)

Hi Tomas, I just wanted to open this issue to keep track of our discussion about dynesty support. This isn't very urgent at all, but it would go a long way to help me be more productive with the retrievals I plan to do in the near future (for reasons I describe below).
I hope to start by adding dynesty support for the retrievals; we can then likely take the lessons learned there and add support for the forward modeling (I think that is less important, since the nuances of nested sampling aren't really a potential limiting factor in the grid interpolation/model comparison).

Some hand-wavy reasons for implementing dynesty:

  • options to do both static nested sampling (effectively reproduces MultiNest) and dynamic nested sampling (effectively reproduces UltraNest).
  • options to do importance nested sampling
  • the two above give more flexibility in allowing the user to choose how they'd like to estimate their posterior and to what fidelity
  • actively developed (so easier to bugfix and maintain as python versions change)
  • better documented
  • better multiprocessing support: it will enable multiprocessing on a single machine without using MPI, and it will enable using MPI across multiple nodes. For instance, at my facility there are only 48 cores per node, so with MPI + MultiNest I can use at most that many cores, but with schwimmbad and dynesty I could allocate cores across multiple nodes that follow a single master process on one node (I think?).

The issue with a clean port (e.g., adding a run_dynesty function that is effectively the same as run_multinest, except that it passes the loglike and prior functions to the dynesty interface) is that, within species.fit.retrieval.run_multinest, loglike_func and prior_func are local objects: functions defined inside run_multinest, not class attributes or global objects. Because of this, when attempting to set up a pool via with dynesty.pool.Pool(npool, loglike_func, prior_func) as pool:, you get an error like

AttributeError: Can't pickle local object 'AtmosphericRetrieval.run_dynesty.<locals>.loglike_func'

and when trying to initialize dynesty on one core with dsampler = dynesty.NestedSampler(loglike_func, prior_func, len(self.parameters)), you get an error like

TypeError: AtmosphericRetrieval.run_dynesty.<locals>.prior_func() missing 2 required positional arguments: 'n_dim' and 'n_param'

I've tried to hack something together to show these errors on a dummy branch of my fork, dynesty_branch_2.

I've pseudo-coded / parallel-implemented a bigger overhaul in a separate file species.fit.dretrieval.py on a branch of my fork, dynesty-retrieval, which is my first attempt to solve this by making loglike_func and prior_func attributes of the AtmosphericRetrieval class so that they can be interpreted by dynesty (see the sketch below). This of course involves also turning the variables defined within run_multinest that loglike and prior rely on into class attributes, which became something of a mess - I don't intend that code to be the actual implementation strategy, it's more like a sketch book 😅.
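For reference, a minimal sketch of that strategy with a toy class (illustrative only, not the actual species code): bound methods of a top-level class can be pickled, unlike closures defined inside run_multinest, so dynesty's pool can distribute them:

import dynesty
import numpy as np

class ToyRetrieval:
    # stand-in for AtmosphericRetrieval with picklable methods
    def __init__(self, parameters):
        self.parameters = parameters

    def prior_func(self, cube):
        # map the unit cube to the parameter ranges (identity here)
        return cube

    def loglike_func(self, params):
        # dummy Gaussian log-likelihood for illustration
        return -0.5 * float(np.sum((params - 0.5) ** 2))

    def run_dynesty(self, npool):
        with dynesty.pool.Pool(npool, self.loglike_func, self.prior_func) as pool:
            sampler = dynesty.NestedSampler(
                pool.loglike, pool.prior_transform,
                len(self.parameters), pool=pool)
            sampler.run_nested()
        return sampler.results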

If there's a different way to do this that I've overlooked, however, one that doesn't involve such a big change to the structure of the AtmosphericRetrieval class, I'd prefer that. Hence I've done most of this in pseudo-code and in a separate file, to start tinkering with. I was able to run a full retrieval on some test data locally with multiprocessing through dynesty, by calling AtmosphericRetrieval from dretrieval.py, but I haven't been able to get an MPI implementation to work.

I'm also trying to think through whether there is any memory-saving way to set up the retrieval across multiple cores that doesn't involve loading the opacity tables on each core…; that has been an issue on my cluster. This is definitely out of the scope of this issue, but it may be possible in the future with the schwimmbad/dynesty architecture that this issue could help set up? I am thinking of a main/worker structure (like the mpi-demo.py in schwimmbad) that loads the AtmosphericRetrieval and RadTrans objects once but is able to distribute the calc_flux calls to workers in the pool. There is probably a reason, which I am not thinking of, that this is not feasible.

Error in loglikelihood function: returned non-finite number

Hi - I'm attempting to fit a GRAVITY spectrum (with covariance matrix) using ultranest and keep running into this error. It appears related to this ultranest issue (JohannesBuchner/UltraNest#5). My code is as follows:

# add in object
database.add_object('SECRET GRAVITY OBSERVATION',
                    distance=distance,
                    app_mag=None,
                    spectrum={'GRAVITY': (file, file, 500.)},
                    )

# add in model and init FitModel object
model_choice = 'drift-phoenix'
fit = species.FitModel(object_name='SECRET GRAVITY OBSERVATION',
                       model=model_choice,
                       bounds={'teff': (300., 1900.)},
                       inc_phot=False,
                       inc_spec=True,
                       fit_corr=['GRAVITY'],
                       weights=None)

fit.run_ultranest(tag='SECRET',
                  min_num_live_points=500,
                  output=figpref+'ultranest',
                  prior={'mass': (11., 1.)}
                  )

Running nested sampling with UltraNest...

/Users/wbalmer/ultranest/ultranest/store.py:195: DeprecationWarning: np.float is a deprecated alias for the builtin float. To silence this warning, use float by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use np.float64 here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
'points', dtype=np.float,


AssertionError Traceback (most recent call last)
/var/folders/0q/4ls9h2fn4lv90qs43vdtzvkw0002tl/T/ipykernel_94525/3740028938.py in
----> 1 fit.run_ultranest(tag='SECRET',
2 min_num_live_points=500,
3 output=figpref+'ultranest',
4 prior={'mass': (11., 1.)}
5 )

/opt/anaconda3/envs/exog/lib/python3.9/site-packages/typeguard/__init__.py in wrapper(*args, **kwargs)
925 memo = _CallMemo(python_func, _localns, args=args, kwargs=kwargs)
926 check_argument_types(memo)
--> 927 retval = func(*args, **kwargs)
928 try:
929 check_return_type(retval, memo)

~/species/species/analysis/fit_model.py in run_ultranest(self, tag, min_num_live_points, output, prior)
1785 return self.lnlike_func(params, prior=prior)
1786
-> 1787 sampler = ultranest.ReactiveNestedSampler(self.modelpar,
1788 lnlike_ultranest,
1789 transform=lnprior_ultranest,

~/ultranest/ultranest/integrator.py in __init__(self, param_names, loglike, transform, derived_param_names, wrapped_params, resume, run_num, log_dir, num_test_samples, draw_multiple, num_bootstraps, vectorized, ndraw_min, ndraw_max, storage_backend, warmstart_max_tau)
1093 self.ndraw_max = ndraw_max
1094 self.build_tregion = transform is not None
-> 1095 if not self._check_likelihood_function(transform, loglike, num_test_samples):
1096 assert self.log_to_disk
1097 if resume_similar and self.log_to_disk:

~/ultranest/ultranest/integrator.py in _check_likelihood_function(self, transform, loglike, num_test_samples)
1160 assert np.shape(logl) == (num_test_samples,), (
1161 "Error in loglikelihood function: returned shape is %s, expected %s" % (np.shape(logl), (num_test_samples,)))
-> 1162 assert np.isfinite(logl).all(), (
1163 "Error in loglikelihood function: returned non-finite number: %s for input u=%s p=%s" % (logl, u, p))
1164

AssertionError: Error in loglikelihood function: returned non-finite number: [-69595.69916114 nan] for input u=[[0.88514244 0.80027242 0.94738644 0.90123026 0.50204652 0.04512219]
[0.1820225 0.83629702 0.48314143 0.17793817 0.37901221 0.37225628]] p=[[ 1.71622790e+03 5.00068104e+00 2.52647792e-01 1.43086118e+00
-1.49386045e+00 4.51221917e-02]
[ 5.91236000e+02 5.09074255e+00 -1.65172711e-01 9.24556720e-01
-1.86296336e+00 3.72256282e-01]]

Wondering what might be going wrong here and whether there's a potential solution. It appears to happen for at least the exo-rem and drift-phoenix models (I haven't tested others), and it also appears to happen whether or not I include fit_corr=['GRAVITY']. Wondering if it's a problem with the spectrum itself?

Running example fails with UnboundLocalError: local variable 'integrand1' referenced before assignment ...

I am simply following the initial steps on https://species.readthedocs.io/en/latest/tutorials/running_species.html after a fresh pip install. When doing

database.add_companion('51 Eri b')

the result is the following error:

>>> database.add_companion('51 Eri b')
Downloading Vega spectrum (270 kB)... [DONE]
Adding Vega spectrum... [DONE]
Adding filter: MKO/NSFCam.J... [DONE]
Adding filter: MKO/NSFCam.H... [DONE]
Adding filter: MKO/NSFCam.K... [DONE]
Adding filter: Paranal/SPHERE.IRDIS_B_H... [DONE]
Adding filter: Paranal/SPHERE.IRDIS_D_H23_2... [DONE]
Adding filter: Paranal/SPHERE.IRDIS_D_K12_1... [DONE]
Adding filter: Keck/NIRC2.Lp... [DONE]
Adding filter: Keck/NIRC2.Ms... [DONE]
Adding object: 51 Eri b
   - Distance (pc) = 29.78 +/- 0.12
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/feldt/miniconda3/envs/petitRADTRANS/lib/python3.9/site-packages/typeguard/__init__.py", line 891, in wrapper
    retval = func(*args, **kwargs)
  File "/home/feldt/miniconda3/envs/petitRADTRANS/lib/python3.9/site-packages/species/data/database.py", line 171, in add_companion
    self.add_object(object_name=item,
  File "/home/feldt/miniconda3/envs/petitRADTRANS/lib/python3.9/site-packages/typeguard/__init__.py", line 891, in wrapper
    retval = func(*args, **kwargs)
  File "/home/feldt/miniconda3/envs/petitRADTRANS/lib/python3.9/site-packages/species/data/database.py", line 657, in add_object
    flux[mag_item], error[mag_item] = synphot.magnitude_to_flux(
  File "/home/feldt/miniconda3/envs/petitRADTRANS/lib/python3.9/site-packages/typeguard/__init__.py", line 891, in wrapper
    retval = func(*args, **kwargs)
  File "/home/feldt/miniconda3/envs/petitRADTRANS/lib/python3.9/site-packages/species/analysis/photometry.py", line 344, in magnitude_to_flux
    zp_flux = self.zero_point()
  File "/home/feldt/miniconda3/envs/petitRADTRANS/lib/python3.9/site-packages/typeguard/__init__.py", line 891, in wrapper
    retval = func(*args, **kwargs)
  File "/home/feldt/miniconda3/envs/petitRADTRANS/lib/python3.9/site-packages/species/analysis/photometry.py", line 101, in zero_point
    return self.spectrum_to_flux(wavelength_crop, flux_crop)[0]
  File "/home/feldt/miniconda3/envs/petitRADTRANS/lib/python3.9/site-packages/typeguard/__init__.py", line 891, in wrapper
    retval = func(*args, **kwargs)
  File "/home/feldt/miniconda3/envs/petitRADTRANS/lib/python3.9/site-packages/species/analysis/photometry.py", line 208, in spectrum_to_flux
    integral1 = np.trapz(integrand1, wavelength[indices])
UnboundLocalError: local variable 'integrand1' referenced before assignment

Looks like no detector type ever gets assigned…
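A hedged reconstruction of the failure mode (illustrative names only, not the actual species source): if integrand1 is only assigned inside branches on the detector type, then a filter without a stored detector type leaves the variable unbound:

import numpy as np

def spectrum_to_flux(wavelength, flux, transmission, detector_type):
    if detector_type == "energy":
        integrand1 = transmission * flux
    elif detector_type == "photon":
        # photon counters weight the integrand by an extra wavelength factor
        integrand1 = wavelength * transmission * flux
    # if detector_type matched neither branch, integrand1 was never
    # assigned and the next line raises UnboundLocalError
    return np.trapz(integrand1, wavelength)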

Error during installation

I am getting several warnings and errors during the species installation, on Windows 10 with Python 3.9.6. What I did is below:

  1. pip install --upgrade pip
  2. pip install cython
  3. pip install species

Part of the error I am getting is:

error: Could not find module 'hdf5.dll' (or one of its dependencies). Try using the full path with constructor syntax. Loading library to get version: hdf5.dll

ERROR: Command errored out with exit status 1: 'c:\python\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.arg v[0] = '"'"'C:\\Users\\suvad\\AppData\\Local\\Temp\\pip-install-kb2icwhf\\h5py_2e14276aa53349df926b77e05bacce1f\\setup.py'"'"' ; __file__='"'"'C:\\Users\\suvad\\AppData\\Local\\Temp\\pip-install-kb2icwhf\\h5py_2e14276aa53349df926b77e05bacce1f\\setup.py' "'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools i mport setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'e xec'"'"'))' install --record 'C:\Users\suvad\AppData\Local\Temp\pip-record-60yn7xrd\install-record.txt' --single-version-exter nally-managed --compile --install-headers 'c:\python\Include\h5py' Check the logs for full command output.

Any help would be great!

interp2d deprecation in SciPy 1.10 leads to too many warnings printed during sampling

The interp2d deprecation in SciPy 1.10 leads to too many warnings being printed during sampling when using the lognorm extinction priors in FitModel, resulting in a crash when using a Jupyter notebook. The source is dust_utils.interp_lognorm, line 417. My quick fix was to import warnings and suppress the warning output in dust_utils.py directly, but according to the SciPy transition guide, another function should be implemented (see the sketch below).
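A sketch of what the transition guide suggests, assuming the lognorm extinction grid is regular in both axes (all names are illustrative stand-ins): swap the deprecated interp2d for RegularGridInterpolator:

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# illustrative stand-ins for the lognorm extinction grid
sigma_grid = np.linspace(1.0, 3.0, 10)     # width of the size distribution
radius_grid = np.logspace(-2.0, 1.0, 20)   # geometric-mean grain radius (um)
ext_grid = np.random.rand(sigma_grid.size, radius_grid.size)

interp = RegularGridInterpolator((sigma_grid, radius_grid), ext_grid)

# evaluate at arbitrary (sigma, radius) points, without deprecation warnings
points = np.array([[1.5, 0.1], [2.0, 0.5]])
extinction = interp(points)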
