nu-radio / NuRadioMC
A Monte Carlo simulation package for radio neutrino detectors
License: GNU General Public License v3.0
JSON is not ideal for a detector description since it doesn't allow comments. We are using YAML for the config files, which allows comments.
It may be worthwhile to move to a single format in the future. However, TinyDB is only compatible with JSON, so in order to keep both the database support and a human-readable format, a different solution is needed.
Not a high-priority problem, but it should be kept in mind in case we move away from TinyDB at some point, or if someone has a good idea.
Many of the tests in the SignalGen class rely on calls like:

```python
import NuRadioMC.SignalGen.parametrizations as param
spec4 = param.get_frequency_spectrum(E, theta, ff, em, n_index, R, model='ZHS1992')
```

But the `get_frequency_spectrum` method was removed, so these tests no longer work.
Solution: we need to restructure the code so that each shower in the ice is treated in the same way for all stations, channels and ray tracing solutions. Currently, the shower is only shared between different ray tracing solutions.
The volume extension by the 95% quantile of the tau length seems, in my opinion, to be wrongly implemented:
NuRadioMC/NuRadioMC/EvtGen/generator.py, line 438 in 8d335c0
NuRadioMC/NuRadioMC/EvtGen/generator.py, line 497 in 8d335c0
If I choose a zenith range from 0 to pi, then `max_horizontal_distance = np.abs(max(np.tan(0), np.tan(pi)) * fiducial_zmin) = 0`,
and as a consequence the horizontal distance is not extended.
What is intended is the maximum over the whole zenith range, i.e. some sampling like

```python
dvals = np.linspace(attributes['thetamin'], attributes['thetamax'], 1000)
max_horizontal_dist = np.abs(max(np.abs(np.tan(dvals)))*volume['fiducial_zmin'])
```

would be a dirty fix. Any better ideas?
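The collapse of the endpoint formula can be reproduced standalone (the fiducial depth is a made-up illustrative value, not the actual default):

```python
import numpy as np

fiducial_zmin = -2.7e3  # assumed fiducial depth in meters (illustrative)
thetamin, thetamax = 0.0, np.pi

# current (buggy) prescription: tan() is only evaluated at the interval
# endpoints, which are both ~0 for a 0..pi zenith range
endpoint_max = np.abs(max(np.tan(thetamin), np.tan(thetamax)) * fiducial_zmin)

# proposed dirty fix: sample the whole zenith range; note that tan()
# diverges towards 90 degrees, so the result still needs a sensible cap
dvals = np.linspace(thetamin, thetamax, 1000)
interval_max = np.abs(np.max(np.abs(np.tan(dvals))) * fiducial_zmin)

print(endpoint_max)   # ~0: the volume is not extended horizontally
print(interval_max)   # huge: dominated by angles close to 90 degrees
```

This makes the need for a cap explicit: any fix based on sampling tan(theta) has to limit the extension for near-horizontal directions.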
Some changes in NuRadioMC require changes in NuRadioReco, e.g. saving additional parameters in the parameter storage. How do we synchronize these changes?
Can we automatically check for a NuRadioReco version? Right now we don't really have versioning.
Can we read out the git commit status and give a warning, or stop, if NuRadioReco is not at least at a certain commit level?
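Reading out the commit status could be done with a small helper along these lines (a sketch; the function name and how the hash would be compared against a required minimum commit are open):

```python
import subprocess

def get_head_commit(repo_path):
    """Return the HEAD commit hash of a git checkout, or None on failure.

    Sketch for the version-check idea above: call `git rev-parse HEAD`
    in the NuRadioReco checkout and warn/stop based on the result.
    """
    try:
        result = subprocess.run(
            ["git", "rev-parse", "HEAD"],
            cwd=repo_path, capture_output=True, text=True, check=True)
        return result.stdout.strip()
    except (OSError, subprocess.CalledProcessError):
        return None  # not a git checkout, or git not available
```

A pip-installed NuRadioReco would not be a git checkout, so the `None` case needs to be handled gracefully (e.g. fall back to a version string once versioning exists).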
Write modules that take care of the air shower signal propagation in ice.
Major analysis item. Probably needs to be broken down into smaller issues at some point.
The attenuation length is hardcoded in the C++ ray tracer. It should be modularized into a utility attenuation.h with a Python wrapper. This was started but never finished.
Currently the user needs to perform the efield-to-voltage conversion, downsampling and noise adding themselves. These steps are so general that we could also always perform them in the main simulation.py script.
This would have the advantage that we can code noise adding correctly, including adjusting the normalization correctly.
The downside would be for more advanced detector simulations. E.g. for the deep-learning data set we first simulate a trigger on the signal only to see where the signal would trigger; then we add noise and re-simulate the trigger. Something like this would still be possible by setting the 'add noise' config to false and then adding noise in the detector description anyway.
In the future we might want to add noise even before applying the antenna response. This would be precluded from such a change.
My current feeling is to rather leave it as it is.
Title says it all.
It looks like the progress output of the simulation is off by roughly a factor of 2: it finishes when the reported progress is only at 50-60%:
```
WARNING:SignalGen.ARZ:setting seed to 1235
WARNING:SignalGen.ARZ:loading shower library (/home/christoph/Software/NuRadioMC/NuRadioMC/SignalGen/ARZ/shower_library/library_v1.2.pkl) into memory
WARNING:NuRadioReco.antennapattern:loading antenna file vpol_4inch_center took 0 seconds
STATUS:NuRadioMC:processing event group 212/2000 and shower 137/2459 (1 showers triggered) = 5.6%, ETA 17m4s, time consumption: ray tracing = 16% (att. length 73%), askaryan = 81%, detector simulation = 3% reading input = 0%, calculating weights = 0%, distance cut 0%, unaccounted = 0%
STATUS:NuRadioMC:processing event group 319/2000 and shower 217/2459 (1 showers triggered) = 8.8%, ETA 21m9s, time consumption: ray tracing = 12% (att. length 74%), askaryan = 85%, detector simulation = 3% reading input = 0%, calculating weights = 0%, distance cut 0%, unaccounted = 0%
STATUS:NuRadioMC:processing event group 376/2000 and shower 267/2459 (1 showers triggered) = 10.9%, ETA 25m20s, time consumption: ray tracing = 11% (att. length 75%), askaryan = 85%, detector simulation = 4% reading input = 0%, calculating weights = 0%, distance cut 0%, unaccounted = 0%
STATUS:NuRadioMC:processing event group 459/2000 and shower 328/2459 (1 showers triggered) = 13.3%, ETA 27m0s, time consumption: ray tracing = 10% (att. length 76%), askaryan = 86%, detector simulation = 4% reading input = 0%, calculating weights = 0%, distance cut 0%, unaccounted = 0%
STATUS:NuRadioMC:processing event group 556/2000 and shower 408/2459 (6 showers triggered) = 16.6%, ETA 25m58s, time consumption: ray tracing = 9% (att. length 75%), askaryan = 86%, detector simulation = 5% reading input = 0%, calculating weights = 0%, distance cut 0%, unaccounted = 0%
STATUS:NuRadioMC:processing event group 665/2000 and shower 494/2459 (6 showers triggered) = 20.1%, ETA 24m47s, time consumption: ray tracing = 9% (att. length 75%), askaryan = 86%, detector simulation = 4% reading input = 0%, calculating weights = 0%, distance cut 0%, unaccounted = 0%
STATUS:NuRadioMC:processing event group 742/2000 and shower 558/2459 (6 showers triggered) = 22.7%, ETA 24m39s, time consumption: ray tracing = 9% (att. length 76%), askaryan = 87%, detector simulation = 4% reading input = 0%, calculating weights = 0%, distance cut 0%, unaccounted = 0%
STATUS:NuRadioMC:processing event group 835/2000 and shower 624/2459 (7 showers triggered) = 25.4%, ETA 24m17s, time consumption: ray tracing = 9% (att. length 76%), askaryan = 86%, detector simulation = 4% reading input = 0%, calculating weights = 0%, distance cut 0%, unaccounted = 0%
STATUS:NuRadioMC:processing event group 1016/2000 and shower 753/2459 (8 showers triggered) = 30.6%, ETA 20m59s, time consumption: ray tracing = 9% (att. length 76%), askaryan = 86%, detector simulation = 5% reading input = 0%, calculating weights = 0%, distance cut 0%, unaccounted = 0%
STATUS:NuRadioMC:processing event group 1139/2000 and shower 854/2459 (9 showers triggered) = 34.7%, ETA 19m22s, time consumption: ray tracing = 9% (att. length 76%), askaryan = 87%, detector simulation = 4% reading input = 0%, calculating weights = 0%, distance cut 0%, unaccounted = 0%
STATUS:NuRadioMC:processing event group 1271/2000 and shower 948/2459 (10 showers triggered) = 38.6%, ETA 18m8s, time consumption: ray tracing = 9% (att. length 76%), askaryan = 87%, detector simulation = 5% reading input = 0%, calculating weights = 0%, distance cut 0%, unaccounted = 0%
STATUS:NuRadioMC:processing event group 1380/2000 and shower 1035/2459 (13 showers triggered) = 42.1%, ETA 17m2s, time consumption: ray tracing = 9% (att. length 76%), askaryan = 87%, detector simulation = 4% reading input = 0%, calculating weights = 0%, distance cut 0%, unaccounted = 0%
STATUS:NuRadioMC:processing event group 1453/2000 and shower 1088/2459 (16 showers triggered) = 44.2%, ETA 16m54s, time consumption: ray tracing = 8% (att. length 76%), askaryan = 87%, detector simulation = 5% reading input = 0%, calculating weights = 0%, distance cut 0%, unaccounted = 0%
STATUS:NuRadioMC:processing event group 1608/2000 and shower 1211/2459 (18 showers triggered) = 49.2%, ETA 14m53s, time consumption: ray tracing = 8% (att. length 76%), askaryan = 87%, detector simulation = 5% reading input = 0%, calculating weights = 0%, distance cut 0%, unaccounted = 0%
STATUS:NuRadioMC:processing event group 1695/2000 and shower 1277/2459 (20 showers triggered) = 51.9%, ETA 14m21s, time consumption: ray tracing = 8% (att. length 76%), askaryan = 87%, detector simulation = 5% reading input = 0%, calculating weights = 0%, distance cut 0%, unaccounted = 0%
STATUS:NuRadioMC:processing event group 1855/2000 and shower 1407/2459 (20 showers triggered) = 57.2%, ETA 12m21s, time consumption: ray tracing = 8% (att. length 76%), askaryan = 87%, detector simulation = 5% reading input = 0%, calculating weights = 0%, distance cut 0%, unaccounted = 0%
STATUS:NuRadioMC:processing event group 1955/2000 and shower 1488/2459 (20 showers triggered) = 60.5%, ETA 11m27s, time consumption: ray tracing = 8% (att. length 76%), askaryan = 87%, detector simulation = 5% reading input = 0%, calculating weights = 0%, distance cut 0%, unaccounted = 0%
STATUS:NuRadioMC:start saving events
STATUS:NuRadioMC:fraction of triggered events = 16/2000 = 0.007 (sum of weights = 11.84)
STATUS:NuRadioMC:Veff = 0.8034 km^3, Veffsr = 10.1 km^3 sr
STATUS:NuRadioMC:Timing of NuRadioReco modules
efieldToVoltageConverter: 6s 14.5%
channelResampler: 0s 0.0%
hardwareResponseIncorporator: 35s 74.4%
triggerSimulator: 2s 4.4%
channelSignalReconstructor: 3s 6.6%
STATUS:NuRadioMC:2459 events processed in 17m45s = 433.16ms/event (0.0% input, 8.3% ray tracing, 86.8% askaryan, 4.8% detector simulation, 0.0% output, 0.1% weights calculation)
```
The uncertainty in the travel-time calculation of the analytic formula seems to be around 0.1 ns. This is too large when used for DnR calculations. I think this uncertainty was introduced when the Greenland ice model was added and the `get_z_deep` function was added to the analytic ray tracer. This function automatically calculates the depth below which the ice can be assumed to be uniform (n(z) = const). Making the condition stricter increases the precision.
I changed the value in ac8262c, but we should make a thorough test of the time resolution for different ice models by comparing to the numerical calculation.
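The role of the condition can be sketched for an illustrative exponential profile (the parameter values below are assumptions, not the actual NuRadioMC ice models):

```python
import math

# assumed exponential ice profile: n(z) = n_ice - delta_n * exp(z / z_0)
n_ice, delta_n, z_0 = 1.78, 0.43, 71.0  # illustrative values

def get_z_deep(tolerance):
    """Depth below which |n(z) - n_ice| < tolerance, i.e. where the
    ice can be treated as uniform (sketch of the condition, not the
    actual NuRadioMC implementation)."""
    # delta_n * exp(z / z_0) < tolerance  =>  z < z_0 * log(tolerance / delta_n)
    return z_0 * math.log(tolerance / delta_n)

# a stricter tolerance pushes the uniform-ice region deeper, which is
# why tightening the condition improves the travel-time precision
print(get_z_deep(1e-6), get_z_deep(1e-9))
```

A systematic test would scan this tolerance per ice model and compare the resulting travel times against the numerical ray tracer.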
To keep track of where tau decays (or, in general, any decays) come from, we should save all previous interactions into the output file even if they don't occur in the fiducial volume. In the simulation, those events should just be skipped.
In RNO we are going to have around 20 channels, but we are usually only simulating triggers on the four channels of the phased array. Since most simulated events won't trigger, it would be nice to be able to first simulate the signal for the trigger channels, check if there is a trigger and only simulate the other channels if there actually was a trigger. This would speed up the simulations by quite a lot.
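The proposed two-stage scheme could look like this control flow (a toy sketch; all names and the event representation are made up for illustration, the real implementation would live in simulation.py):

```python
def simulate_channel(event, channel_id):
    """Placeholder for efield propagation + channel simulation."""
    event["simulated"].append(channel_id)

def passes_trigger(event):
    """Placeholder for the trigger simulator on the trigger channels."""
    return event["would_trigger"]

def simulate_event(event, trigger_channels, all_channels):
    # stage 1: only simulate the channels that participate in the trigger
    for ch in trigger_channels:
        simulate_channel(event, ch)
    if not passes_trigger(event):
        return False  # most events end here, skipping the other channels
    # stage 2: the event triggered, now fill in the remaining channels
    for ch in all_channels:
        if ch not in trigger_channels:
            simulate_channel(event, ch)
    return True
```

Since most events fail the trigger, the expensive signal simulation would run for only 4 of ~20 channels in the typical case.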
I've found that our implementation of the ARZ model fails at 90 degrees, after Ilse found a bunch of triggering events near that angle. Here's an example for a 1 EeV hadronic shower at 1 km distance and 90 degrees viewing angle.
The pulse is shaky, periodic, and larger than the corresponding pulse at the Cherenkov angle, which shouldn't happen. ARZ should work perfectly at 90 degrees, as you can see in Fig. 5 of the paper (observer at 10,0,0): https://arxiv.org/pdf/1106.6283.pdf
So, first of all, I propose restricting the delta_C_cut to 25 degrees when dealing with the ARZ model, and maybe making 25 degrees the default for all models. This would be a quick fix. However, we should find out what's going on with our implementation.
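The quick fix amounts to a simple angular cut (only `delta_C_cut` is from this issue; the function name and interface are illustrative):

```python
import math

# proposed quick fix: skip showers viewed more than delta_C_cut away
# from the Cherenkov angle, where the ARZ implementation misbehaves
delta_C_cut = math.radians(25)

def within_cone_cut(viewing_angle, cherenkov_angle, cut=delta_C_cut):
    """True if the viewing angle is close enough to the Cherenkov angle
    that the Askaryan model can be trusted (sketch)."""
    return abs(viewing_angle - cherenkov_angle) <= cut
```

With a Cherenkov angle around 56 degrees in deep ice, this cut would reject the problematic 90-degree geometries while keeping everything near the cone.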
Since we want to merge NuRadioReco over, we should also move the linter functionality over. Everyone needs to help clean up the code, though.
@christophwelling can you start a PR and then we all help?
Or who volunteered to do this? I forgot ... sorry :/
The pulse example throws a lot of warnings like:

```
WARNING: ErfaWarning: ERFA function "dtf2d" yielded 1 of "dubious year (Note 6)" [astropy._erfa.core]
WARNING:astropy:ErfaWarning: ERFA function "dtf2d" yielded 1 of "dubious year (Note 6)"
/Users/cglaser/software/NuRadioMC/NuRadioMC/utilities/cross_sections.py:28: RuntimeWarning: invalid value encountered in log
  l_eps = np.log(epsilon - c[0])
STATUS:NuRadioMC:processing 244/1000 (12 triggered) = 24.4%, ETA 28s, time consumption: ray tracing = 46% (att. length 34%), askaryan = 0%, detector simulation = 52% reading input = 0%, calculating weights = 1%, distance cut 0%, unaccounted = 1%
/Users/cglaser/software/NuRadioReco/NuRadioReco/modules/channelSignalReconstructor.py:126: RuntimeWarning: invalid value encountered in sqrt
  SNR['integrated_power'] = np.sqrt(SNR['integrated_power'])
```
Currently NuRadioMC can only use the normal detector. It would be nice if the GenericDetector class I implemented in NuRadioReco could also be used.
Example 03 only works on a cluster -- if I am not mistaken. Maybe we should add this information to the example.
Example 04 doesn't contain anything. The name also suggests that it is extremely outdated ... ari files??? @cg-laser do you remember what this was supposed to do?
Example 05 doesn't work for me -- see #260
The branching ratios of a tau decay are:
~18% e- + 2 nu's -> EM shower
~17% muon + 2 nu's -> no shower (but maybe muon dE/dx energy losses?)
~64% hadronic decay -> starts a hadronic shower, but a different shower than a nu NC interaction, which starts right away with hundreds of partons
The decay channel should be simulated in the event generator.
For EM decays: Is all energy deposited into the electron? Or is also a significant fraction of the energy carried by the neutrinos?
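Sampling the decay channel in the event generator could be as simple as this (a sketch using the approximate branching ratios quoted above, normalized to 1; the real generator would use proper tables and also sample the energy fractions, which is exactly the open question):

```python
import random

# approximate branching ratios from the issue text, normalized
BRANCHING_RATIOS = [("em", 0.18), ("muon", 0.17), ("hadronic", 0.65)]

def sample_decay_channel(rng):
    """Draw a tau decay channel according to the branching ratios."""
    r = rng.random()
    acc = 0.0
    for channel, br in BRANCHING_RATIOS:
        acc += br
        if r < acc:
            return channel
    return BRANCHING_RATIOS[-1][0]  # guard against rounding
```

For the EM channel, the energy split between the electron and the two neutrinos still needs to be sampled from the decay kinematics rather than assuming the electron carries everything.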
While dealing with the build bot (#164) we found that the Python ray tracing results are stable to 1e-6 across platforms, while the C++ results are only stable to 0.05%. This is the relative uncertainty on the signal amplitude.
While not affecting the big picture, this seems worrisome and should be followed up at some point.
So far it is somewhat unintuitive where the properties of the parent particle are stored. We should continue discussing how to change this and come up with good ideas.
I made a few tests with the RalstonBuniy module and it seems to work fine. I couldn't reproduce any of the problems that Brian saw previously, which were probably just related to a bug in the original iFFT implementation. However, while testing the module I found another issue that needs to be solved before using it (for timing studies) in NuRadioMC.
As illustrated by the two plots below, the pulse time behaves weirdly and makes this module impossible to use at the moment for timing studies. I think the reason is that the phase is just modified according to the distance from the emitter to the observer. But the phase is 2 pi periodic, hence at certain distances the pulse just moves from the end of the trace to the beginning. It can even happen that part of the pulse is at the beginning of the trace and the other part is at the end.
We can't just shift the pulses to always be in the middle of the trace, because we want to keep track of the physical time delays that we get when moving off the Cherenkov cone. I think the best way would be if the module put the pulse arrival time on the Cherenkov cone at a fixed time. The time delays due to different distances to the receivers are calculated via ray tracing anyway.
This might actually also impact the trigger modules, as we require a time coincidence between antennas.
I added the plotting script in the branch 'askaryan-hanson' where also further improvements of this module should take place.
This is to remind us that on January 1st, 2020, Python 2.7 will no longer be maintained. At some point before that we should make sure that everything runs with Python 3.
I would like to change how the effective volume is calculated. Currently the "water-equivalent effective volume steradian" is calculated, which is the original effective volume x Omega (typically 4 pi) x rho_water/rho_ice. This definition has rather historic reasons and is not very useful. It is also error prone, because one needs to remember to use the interaction length of neutrinos in water to convert to a limit, and for the effective area one needs to remember the Omega of the simulation.
I would like to change it to just give the effective volume (for ice).
And we actually got it wrong in the limit calculation: we calculate the interaction length for ice, but the effective volume was converted to 'water'. So we're actually 10% better...
Because changing the utility script will change the results of everyone using it, I wanted to discuss it first before changing anything. What do you think?
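The density factor is where the ~10% hides. A minimal sketch of the water-equivalent conversion (all numbers illustrative; whether the ratio or its inverse is applied depends on the convention, which is exactly the confusion this proposal would remove):

```python
# assumed densities in g/cm^3
rho_ice, rho_water = 0.917, 1.0

V_eff_ice = 0.8  # km^3, made-up effective volume for ice

# mass equivalence: rho_ice * V_ice = rho_water * V_water_equivalent
V_we = V_eff_ice * rho_ice / rho_water
print(V_we)  # ~8-9% smaller than V_eff_ice
```

Reporting the ice effective volume directly, and pairing it with the ice interaction length, removes the need to carry this factor (and Omega) through every downstream calculation.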
Loop over all stations available in the detector description. Currently the user needs to specify a specific station.
After talking to Krijn, I'm convinced that the refractive index at the origin cannot be used for the calculations. The coherence at the observer position is determined by the observation times, and those are defined by the average index of refraction along the path. If I apply my "spherical ball in the far field" argument to the atmosphere and use the refractive index at the origin, I don't get the correct ZHAireS ZHS formula, which uses the average index. The change to the average index will affect the parameterisations and the ARZ formula, modifying the Cherenkov cone position, hopefully by not that much.
If we want to use the average index, however, a little restructuring of the code must take place. We need to calculate it upon ray tracing and then feed it to the SignalGen module. Two matters must be addressed.
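The quantity to be computed during ray tracing is the path-averaged index. For intuition, a sketch for a straight path through an assumed exponential profile (parameter values are illustrative, not an actual NuRadioMC ice model):

```python
import math

# assumed exponential profile: n(z) = n_ice - delta_n * exp(z / z_0)
n_ice, delta_n, z_0 = 1.78, 0.43, 71.0  # illustrative values

def n(z):
    return n_ice - delta_n * math.exp(z / z_0)

def average_index(z1, z2, samples=1001):
    """Average n(z) along a straight line between depths z1 and z2."""
    if z1 == z2:
        return n(z1)
    zs = [z1 + (z2 - z1) * i / (samples - 1) for i in range(samples)]
    return sum(n(z) for z in zs) / samples

# near the surface the path average differs markedly from n at the
# vertex, which is the difference this issue is about
print(n(-200.0), average_index(-200.0, 0.0))
```

In the real code the average would be taken along the curved ray path returned by the ray tracer, not along a straight line.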
We want to implement multiple showers or triggers coming from the same particle, e.g. a double bang event or DnR causing 2 separate triggers.
From the reconstruction/NuRadioReco side, the plan is to let these events have different ids and draw the connection by storing the IDs of the associated events in the event class. We then give the event reader a function that returns the associated events belonging to some given event.
From the simulation/NuRadioMC side, this is a bit more tricky. The solution in the HDF5 files is to give events like this the same ID. This is not a good solution, since we want to give them different IDs in the .nur files (and the eventwriter will refuse to write events with the same ID into the same file) but still be able to identify the same events in the .nur and HDF5 files.
Our idea was to add a field "particle ID" (or some better name we come up with) to the HDF5 files. This ID can be the same for events caused by the same particle so we know which events belong together while the eventID is different for each event and is the same as the one written into the .nur files.
Does anyone have any reservations about this plan?
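As a sketch of the proposed bookkeeping (the field names are this issue's proposal, not an existing format): every entry carries a unique event_id matching the .nur files, plus a shared particle_id that groups events from the same parent.

```python
from collections import defaultdict

# toy HDF5-like rows with the proposed fields
entries = [
    {"event_id": 0, "particle_id": 0},  # first bang of a double bang
    {"event_id": 1, "particle_id": 0},  # second bang, same tau
    {"event_id": 2, "particle_id": 1},  # unrelated event
]

def associated_events(entries, particle_id):
    """Return the event_ids belonging to one parent particle."""
    groups = defaultdict(list)
    for entry in entries:
        groups[entry["particle_id"]].append(entry["event_id"])
    return groups[particle_id]
```

The event reader on the NuRadioReco side would expose essentially this lookup, so the .nur and HDF5 views of a double-bang event stay consistent.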
There are some utilities (e.g. fft and units) that are just copies of a utility in NuRadioReco. Since NuRadioMC has NuRadioReco as a dependency anyway, I suggest we remove these duplicates and use the ones from NuRadioReco. Otherwise, this is just additional maintenance and can lead to errors.
In general, the goal of the event generator is to produce a distribution of neutrino interactions in our simulation volume (a cylinder) that corresponds to an isotropic flux.
We recently changed the event generation to generate a zenith distribution that is weighted with sin(theta) AND the projected area of the cylinder (our simulation volume). The idea behind this was that the neutrino flux is isotropic, i.e. constant per unit area perpendicular to its direction. However, the flaw in the argument is that the slant depth (the amount of matter the neutrino passes through within our simulation volume) depends strongly on where it enters the cylinder. For vertical events the slant depth is the same, but for all other directions it varies. If we correct for this effect, it inverts the previous correction for the projected area, resulting back in the sin(theta) distribution we were using all along.
To better see why the projected-area weighting is wrong, consider two simulations for a small zenith-angle bin where the projected area is constant. With the projected-area prescription, we would average the effective volume from each zenith-angle bin weighted by the projected area of this bin. Further assume that a cylinder radius of 5 km is large enough to contain all possible triggered events. Then we should get the same result if we increased the radius to e.g. 10 km. We indeed get exactly the same Veff per zenith-angle bin, but the projected area changes, which results in a different averaging. This should not be the case, thus the prescription is wrong.
To test it, I implemented an unforced event generation where an isotropic flux of neutrinos is propagated through the simulation volume and only those events interacting in the volume are saved. #186
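The sin(theta) zenith distribution argued for above is equivalent to sampling uniformly in cos(theta), which can be written in a few lines (a generic sketch, not the actual generator code):

```python
import math
import random

def sample_zenith(rng, thetamin=0.0, thetamax=math.pi):
    """Sample a zenith angle with a sin(theta) distribution, i.e.
    uniform in cos(theta), as appropriate for an isotropic flux."""
    cmin, cmax = math.cos(thetamax), math.cos(thetamin)
    return math.acos(rng.uniform(cmin, cmax))
```

An unforced cross-check, as in #186, should reproduce this distribution for the interaction vertices when the volume is large enough.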
Write an event generator for calibration pulses.
As @christophwelling found, there are some strange timing offsets in LPM showers.
We discussed that this may be related to the fact that we calculate the ray tracing from the vertex instead of the shower maximum. It may also be possible to have scenarios in which subshowers interfere. What does everyone think about this? Could we simulate this better?
People can put in all sorts of other new physics, but they need to know what is basically required.
In order to improve on lepton propagation, it seems logical to add the option of using:
An update to the parameterization has been developed by Alvarez-Muniz et al. (Askaryan radiation from neutrino-induced showers in ice)
It has been submitted to PRD and will be available on arxiv shortly. We should add an additional model in NuRadioMC for this.
If you would like to help, please contact @cg-laser, @daniel-zid or @anelles for access to the pre-print article.
In the pulser calibration example, the simulation tries to calculate the effective volume. Since the detector volume cannot be calculated, this crashes. The error is caught and only a logger output is given, but this should be handled better.
See also discussion in NuRadioReco.
We should aim for a version 1.0 and a release.
examples/01_Veff_simulation/T03visualizeVeff.py
The hardcoded path should be removed. I am unsure what it should be.
Just executing the example fails: I assume that the input file needs updating to the revised format.
```
Traceback (most recent call last):
  File "E01detector_simulation.py", line 95, in <module>
    sim.run()
  File "/Users/anelles/InIceSim/NuRadioMC/NuRadioMC/simulation/simulation2.py", line 177, in run
    self._read_input_neutrino_properties()
  File "/Users/anelles/InIceSim/NuRadioMC/NuRadioMC/simulation/simulation2.py", line 525, in _read_input_neutrino_properties
    self._inttype = self._fin['interaction_type'][self._iE]
KeyError: 'interaction_type'
```
The output in the effective-volume example seems weird. It looks like the number of triggers is going down from 8 to 7 to 6:

```
STATUS:NuRadioMC:processing 1252/1252 (8 triggered) = 100.0%, ETA 0s, time consumption: ray tracing = 69% (att. length 40%), askaryan = 1%, detector simulation = 26% reading input = 0%, calculating weights = 2%, distance cut 0%, unaccounted = 1%
STATUS:NuRadioMC:start saving events
STATUS:NuRadioMC:fraction of triggered events = 7/1000 = 0.006 (sum of weights = 4.60)
STATUS:NuRadioMC:Veff = 0.624 km^3, Veffsr = 7.841 km^3 sr
STATUS:NuRadioMC:Timing of NuRadioReco modules
efieldToVoltageConverter: 1s 49.4%
channelResampler: 0s 4.7%
channelBandPassFilter: 0s 14.2%
channelBandPassFilter: 0s 14.2%
triggerSimulator: 0s 9.7%
triggerSimulator: 0s 3.9%
triggerSimulator: 0s 3.9%
```
The logarithms for l1 are sometimes -inf. This tends to happen with the Greenland refractive-index model and for refracted solutions. Since for Greenland we expect fewer refracted solutions due to the ice properties, I ignore these (relatively few) cases. I am going to push this workaround to a new branch.
If there's no reflection parameter in the medium, get_path() crashes because it tries to calculate bottom reflections when it shouldn't:

```python
xx, zz = self.__r2d.get_path_reflections(self.__x1, self.__x2, result['C0'], n_points=n_points, reflection=result['reflection'], reflection_case=result['reflection_case'])
```

Just a switch would suffice, but we don't want to touch the ray tracing module without asking first. Can you write a valid condition to skip this line, @cg-laser?
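A toy demonstration of what such a switch could look like (all names are illustrative stand-ins; the real guard would live inside the analytic ray tracer and depend on how the medium stores its reflection parameter):

```python
class ToyMedium:
    """Stand-in for a NuRadioMC medium (attribute name is an assumption)."""
    def __init__(self, reflection=None):
        self.reflection = reflection  # depth of a reflective layer, or None

def get_path(medium):
    """Only enter the bottom-reflection branch if the medium defines one."""
    if getattr(medium, "reflection", None) is None:
        return "plain path"          # skip get_path_reflections entirely
    return "path with reflections"
```

Using `getattr` with a default also keeps older medium classes without the attribute working unchanged.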
The number of solutions for the ray tracing problem is not stable as a function of the root-finding tolerance parameter `tol` in analyticraytracing.py.
For example, the following code:

```python
# imports added for completeness; module paths assumed from the NuRadioMC package layout
import numpy as np
from NuRadioMC.utilities import medium
from NuRadioMC.SignalProp import analyticraytracing as ray

x1 = np.array([478., 0., -149.])
x2 = np.array([1000., 0., -90.])  # direct ray solution
ice = medium.ARAsim_southpole()
r = ray.ray_tracing(x1, x2, ice)
r.find_solutions()
```

returns only one solution when `tol=1e-4` but two solutions when `tol=1e-5`.
The GSL integrator in the C++ ray tracer seems to fail at some point. These are the last few lines of output I got from the simulation:

```
WARNING:sim:processing event 8000/100000 = 8.0%, ETA 0:27:18.695156, time consumption: ray tracing = 73% (att. length 20%), detector simulation = 14%
INFO:sim:event triggered
INFO:sim:event triggered
INFO:sim:event triggered
gsl: qags.c:553: ERROR: bad integrand behavior found in the integration interval
Default GSL error handler invoked.
Aborted (core dumped)
```
Currently, the attenuation-length model is part of the ray tracing code. We should move it into a separate utility class, or even integrate it into the medium class.
A complication is that we also need to make the attenuation length available to the C++ implementation of the ray tracer. The clean solution would be to implement the attenuation-length parameterization in C++ and provide a Python wrapper. As this is just a static model, we could also just replicate the class in C (as we did for the units definition).
There are actually two separate but related issues:
In rare cases both the direct and the reflected solution are returned for almost the same ray; the receive angles differ by a fraction of a degree:

```python
In [13]: sim_station.get_electric_fields()[0][efp.zenith]/units.deg
Out[13]: 37.990554573137445

In [14]: sim_station.get_electric_fields()[1][efp.zenith]/units.deg
Out[14]: 37.98455113752616

In [9]: sim_station.get_electric_fields()[0][efp.ray_path_type]
Out[9]: 'direct'

In [10]: sim_station.get_electric_fields()[1][efp.ray_path_type]
Out[10]: 'reflected'
```
This must be a bug as the direct solution should originate from below.
More tests for different aspects of the code. Needed before we switch over to Python 3.
The cross-section model should be definable in the central config and then used accordingly by all modules/utilities, e.g. the Earth attenuation. The NuRadioMC paper needs to be updated accordingly.
The build bot complains:

```
/home/travis/build/nu-radio/NuRadioMC/NuRadioMC/simulation/simulation2.py:81: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
```
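The fix is to replace the bare `yaml.load()` call with `yaml.safe_load()` (or pass an explicit `Loader`); a minimal example using an inline config string for illustration:

```python
import yaml  # PyYAML

# equivalent to yaml.load(..., Loader=yaml.SafeLoader); silences the
# YAMLLoadWarning and avoids the unsafe default loader
config = yaml.safe_load("noise: True\nsampling_rate: 5.0")
print(config)
```

Since NuRadioMC config files contain plain data (no Python object tags), `safe_load` should be a drop-in replacement.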