
odas_ros's Introduction

odas_ros

This is a ROS package for ODAS.

IntRoLab - Université de Sherbrooke

ODAS Demonstration

Authors

  • Marc-Antoine Maheux
  • Cédric Godin
  • Simon Michaud
  • Samuel Faucher
  • Olivier Roy
  • Vincent Pelletier
  • Philippe Warren
  • François Grondin

License

GPLv3

Prerequisites

You will need CMake, GCC and the following external libraries:

sudo apt-get install cmake gcc build-essential libfftw3-dev libconfig-dev libasound2-dev libpulse-dev libgfortran-*-dev perl 

ODAS ROS uses the audio utilities from audio_utils, so it must be installed in your catkin workspace. If it is not already, here is how to install it:

Clone audio_utils into your catkin workspace:

git clone https://github.com/introlab/audio_utils.git

Install dependencies:

sudo apt-get install gfortran texinfo
sudo pip install libconf

In the cloned directory of audio_utils, run this command to initialize all submodules:

git submodule update --init --recursive

If you get errors when building with catkin_make, you can modify the CMakeLists.txt of audio_utils to add the C++14 compiler option:

add_compile_options(-std=c++14)

Installation

First, you need to clone the repository in your catkin workspace.

git clone https://github.com/introlab/odas_ros.git

In the cloned directory of odas_ros, run this command to initialize all submodules:

git submodule update --init --recursive

Hardware configuration

For ODAS to locate and track sound sources, it needs to be configured. A configuration file (configuration.cfg) provides ODAS with all the information it needs, including the position and direction of each microphone. See ODAS Configuration for details.

Here are the important steps:

Input configuration

Source configuration using pulseaudio

In this part of the configuration file, you need to set the correct pulseaudio device and channel mapping.

# Input with raw signal from microphones
    interface: {
        type = "pulseaudio";
        #"pacmd list-sources | grep 'name:' && pacmd list-sources | grep 'channel map:'" to see the sources and their channel mapping, in the same order
        source = "alsa_input.usb-SEEED_ReSpeaker_4_Mic_Array__UAC1.0_-00.multichannel-input";
        channelmap = ("front-left", "front-right", "rear-left", "rear-right", "front-center", "lfe");

To find your source name and channel mapping, the easiest way is to run pacmd list-sources | grep 'name:' && pacmd list-sources | grep 'channel map:' in a terminal. The output should look something like this:

	name: <alsa_output.pci-0000_00_1f.3.analog-stereo.monitor>
	name: <alsa_input.pci-0000_00_1f.3.analog-stereo>
	name: <alsa_input.usb-SEEED_ReSpeaker_4_Mic_Array__UAC1.0_-00.multichannel-input>
	channel map: front-left,front-right
	channel map: front-left,front-right
	channel map: front-left,front-right,rear-left,rear-right,front-center,lfe

Note that the names and channel maps are listed in the same order. In this case, the source name is alsa_input.usb-SEEED_ReSpeaker_4_Mic_Array__UAC1.0_-00.multichannel-input and the mapping is front-left,front-right,rear-left,rear-right,front-center,lfe. The mapping needs to be reformatted as a list: front-left,front-right,rear-left,rear-right,front-center,lfe becomes channelmap = ("front-left", "front-right", "rear-left", "rear-right", "front-center", "lfe");
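As a convenience, the conversion from the pacmd channel-map string to the libconfig syntax can be scripted. The helper below is a sketch (the function name is hypothetical) that assumes the comma-separated format shown above:

```python
# Convert a pacmd channel-map string ("a,b,c") into the libconfig
# "channelmap" entry expected by the ODAS configuration file.
def to_channelmap(pacmd_map):
    channels = [c.strip() for c in pacmd_map.split(",")]
    quoted = ", ".join('"{}"'.format(c) for c in channels)
    return "channelmap = ({});".format(quoted)

print(to_channelmap("front-left,front-right,rear-left,rear-right,front-center,lfe"))
# channelmap = ("front-left", "front-right", "rear-left", "rear-right", "front-center", "lfe");
```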

Alternative: sound card configuration using ALSA

In this part of the configuration file, you need to set the correct card and device number.

# Input with raw signal from microphones
    interface: {    #"arecord -l" OR "aplay --list-devices" to see the devices
        type = "soundcard";
        devicename = "hw:CARD=1,DEV=0";

To find your card number, plug the card into your computer and run arecord -l in a terminal. The output should look something like this:

**** List of CAPTURE Hardware Devices ****
card 0: PCH [HDA Intel PCH], device 0: ALC294 Analog [ALC294 Analog]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: 8_sounds_usb [16SoundsUSB Audio 2.0], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

In this case, the card number is 1 and the device number is 0 for the 16SoundsUSB audio card, so the device name should be "hw:CARD=1,DEV=0";.
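If you script your setup, the card and device numbers can be pulled out of the arecord -l output. The helper below is a sketch (the function name is hypothetical) whose regular expression assumes the standard ALSA listing format shown above:

```python
import re

# Extract (card number, card name, device number) triples from the
# output of "arecord -l" (assumes the standard ALSA listing format).
def find_capture_devices(arecord_output):
    pattern = re.compile(r"card (\d+): (\S+) \[.*?\], device (\d+):")
    return [(int(m.group(1)), m.group(2), int(m.group(3)))
            for m in pattern.finditer(arecord_output)]

listing = """card 0: PCH [HDA Intel PCH], device 0: ALC294 Analog [ALC294 Analog]
card 1: 8_sounds_usb [16SoundsUSB Audio 2.0], device 0: USB Audio [USB Audio]"""

for card, name, device in find_capture_devices(listing):
    print('{}: devicename = "hw:CARD={},DEV={}";'.format(name, card, device))
```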

Mapping

Depending on your configuration, you will need to map the microphones from the sound card to the software. If you wish to use all microphones, map all of them. For example, if there are 16 microphones and you wish to use them all:

mapping:
{
    map: (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16);
};

If only certain microphones should be used, map just those. For example, to use only microphones 1 to 4, 6 to 10 and 12:

mapping:
{
    map: (1,2,3,4,6,7,8,9,10,12);
};

The selected channels are then renumbered as microphones 1 to 10.
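In other words, the Nth entry of map becomes ODAS microphone N. The snippet below illustrates the renumbering for the example map above:

```python
# The map entry selects hardware channels; ODAS renumbers them
# sequentially, so the 10 selected channels become microphones 1 to 10.
hardware_map = (1, 2, 3, 4, 6, 7, 8, 9, 10, 12)
for odas_mic, hw_channel in enumerate(hardware_map, start=1):
    print("hardware channel {:2d} -> ODAS microphone {:2d}".format(hw_channel, odas_mic))
```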

Microphone configuration

For ODAS to precisely locate and track a sound source, it needs to know each microphone's position precisely. The frame of reference used to measure the microphone positions is the one used for the sound tracking, so it is easiest to place the reference point at the center of the microphone array. Here is an example of a microphone configuration.

# Microphone 1 #TODO
        {
            mu = ( 0.122376, 0.08144437, 0.042 );
            sigma2 = ( +1E-6, 0.0, 0.0, 0.0, +1E-6, 0.0, 0.0, 0.0, +1E-6 );
            direction = ( -0.08144, -0.12238, 0.0 );
            angle = ( 80.0, 100.0 );
        },

For Microphone 1, mu is the position in x, y and z relative to the reference point. sigma2 is the position covariance (xx, xy, xz, yx, yy, yz, zx, zy, zz); this setting should normally remain untouched. direction is a unit vector pointing in the direction the microphone faces, expressed in the reference frame. angle gives the maximum angle at which the gain is 1 and the minimum angle at which the gain is 0.
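To make the angle parameter concrete, the sketch below models a gain that is 1 up to the first angle and 0 beyond the second, with a linear ramp in between. The ramp shape is an assumption; check the ODAS sources for the exact interpolation.

```python
# Directivity gain implied by angle = ( 80.0, 100.0 ):
# full gain within 80 degrees of the microphone axis, zero beyond
# 100 degrees, and a linear ramp in between (ramp shape assumed).
def mic_gain(theta_deg, theta_full=80.0, theta_zero=100.0):
    if theta_deg <= theta_full:
        return 1.0
    if theta_deg >= theta_zero:
        return 0.0
    return (theta_zero - theta_deg) / (theta_zero - theta_full)

print(mic_gain(45.0))   # 1.0
print(mic_gain(90.0))   # 0.5
print(mic_gain(120.0))  # 0.0
```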

Sound Source Localization, Tracking and Separation

ODAS can output the sound source localization, the sound source tracking and the sound source separation:

  • Sound Source Localization: all the potential sound sources on the unit sphere, each given as a position on the sphere with its energy.
  • Sound Source Tracking: The most probable location of the sound source is provided (xyz position on the unit sphere).
  • Sound Source Separation: An Audio Frame of the isolated sound source is provided.

Depending on which type of information will be used, the configuration file needs to be modified. The only things to change are the format and interface for each type of data. If an output is enabled, its format must be json and its interface type socket; if it is disabled, set the format to undefined and the interface type to blackhole.

For example, if the only data that will be used is the sound source tracking:

  • In the # Sound Source Localization section, this should be modified to look like this:
potential: {

        #format = "json";

        #interface: {
        #    type = "socket";
        #    ip = "127.0.0.1";
        #    port = 9002;
        #};

        format = "undefined";

        interface: {
           type = "blackhole";
        };
  • In the # Sound Source Tracking section, this should be modified to look like this:
# Output to export tracked sources
    tracked: {

        format = "json";

        interface: {
            type = "socket";
            ip = "127.0.0.1";
            port = 9000;
        };
    };
  • In the # Sound Source Separation section, this should be modified to look like this:
separated: { #packaging and destination of the separated files

        fS = 44100;
        hopSize = 256;
        nBits = 16;

        interface: {
           type = "blackhole";
        };

        #interface: {
        #    type = "socket";
        #    ip = "127.0.0.1";
        #    port = 9001;
        #}
    };

Note that if an interface type is set to "blackhole" and the format to "undefined", the associated topic won't be published.
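When an output is enabled with a socket interface, ODAS connects as a TCP client to the given ip and port, so a listener must already be running there (the odas_ros node plays this role). The sketch below parses one tracked-sources message; the field names (src, id, x, y, z, activity) follow the JSON ODAS typically emits, but verify them against your own stream, and note that id = 0 conventionally marks an empty tracking slot:

```python
import json

# Parse one tracked-sources JSON message and keep only the slots that
# currently track a source (id = 0 means the slot is empty).
def active_sources(message):
    data = json.loads(message)
    return [s for s in data.get("src", []) if s.get("id", 0) != 0]

sample = ('{"timeStamp": 12, "src": ['
          '{"id": 1, "x": 0.7, "y": 0.1, "z": 0.7, "activity": 0.9}, '
          '{"id": 0, "x": 0.0, "y": 0.0, "z": 0.0, "activity": 0.0}]}')

for s in active_sources(sample):
    print("source {}: ({}, {}, {})".format(s["id"], s["x"], s["y"], s["z"]))
```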

Sound Source Tracking Threshold adjustment

The default configuration file should be correct for most configurations. However, if the Sound Source Tracking does not work (i.e. the published topic /odas/sst does not contain any sources, or contains undesirable ones), it may be because the threshold is not set correctly.

In the Sound Source Tracking section of the configuration file, there is a section with active and inactive parameters:

# Parameters used by both the Kalman and particle filter

   active = (
       { weight = 1.0; mu = 0.3; sigma2 = 0.0025 }
   );

   inactive = (
       { weight = 1.0; mu = 0.15; sigma2 = 0.0025 }
   );

The active parameter sets the upper threshold above which a sound source is considered active, and the inactive parameter sets the lower threshold below which a sound source is considered inactive.

  • If mu in the active and inactive parameters is set too high, few sound sources will be considered active.
  • If mu in the active and inactive parameters is set too low, too many sound sources will be considered active.
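The two thresholds act as a hysteresis band. The toy sketch below shows the intended behavior with hard thresholds; this is a simplification, since ODAS actually evaluates Gaussian distributions parameterized by these mu and sigma2 values rather than comparing against mu directly:

```python
# Hysteresis sketch: a source turns active when its tracking
# probability rises past the "active" mu, and turns inactive only
# when it falls below the "inactive" mu (hard thresholds assumed).
def update_state(is_active, probability, mu_active=0.3, mu_inactive=0.15):
    if not is_active and probability >= mu_active:
        return True
    if is_active and probability < mu_inactive:
        return False
    return is_active

state = False
for p in (0.1, 0.35, 0.2, 0.1):
    state = update_state(state, p)
    print(p, state)
```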

odas_ros's People

Contributors

chcaya, doumdi, francoisgrondin, jeremiebourque1, mamaheux, oliroy92, philippewarren, vincentpelletier1


odas_ros's Issues

x, y and z

What are x, y and z generated from, and what scale does the range of -1 to 1 represent?

Can you please refer me to anything that can explain it?

rostopic echo /odas/ssl gives no output

After installing the odas_ros package and sourcing devel/setup.bash, I run roslaunch odas_ros odas.launch and I get 5 topics:
/odas/ssl
/odas/ssl_pcl2
/odas/sss
/odas/sst
/odas/sst_poses

but when I run rostopic echo /odas/ssl or any other topic name they do not print anything on screen. What am I doing wrong?
Also no visualization window opens.

odas_ros config problem for ReSpeaker USB mic array

I got some errors when configuring the ReSpeaker USB mic array. This device has 4 mics.

If I set the nBits = 32, I got the error: "Source hops: Cannot set sample format: Invalid argument".
If I set the nChannels = 4, I got the error: "Source hops: Cannot set channel count: Invalid argument".

It's still working with this default config anyway:
raw:
{
fS = 16000;
hopSize = 128;
nBits = 16;
nChannels = 6;
...

Is there a ref configuration for the ReSpeaker USB mic array?
Quan

Sink tracks: Cannot connect to server

Hi, I am currently working with the ReSpeaker 4-mic array voice interface. I'm trying to find the location of the sound source using the odas_ros package. After compiling this package, when I run the launch file, I first get the error "Sink tracks: Cannot connect to server".

I told myself that since I had not launched ODAS Studio, the error was surely due to that. So I first launched ODAS Studio (npm start), then the launch file (roslaunch odas_ros odas.launch), and I get a new error: "error: [Errno 98] Address already in use".

Could someone help me?

odas_ros (ROS2) in a Dockerfile

I have been trying to set up odas_ros from the ros2-migration branch and I keep getting errors when attempting to build
my Dockerfile looks like this:

"""
FROM ros:humble-ros-base
RUN apt update && apt upgrade -y
RUN apt install -y git-all cmake gcc build-essential libfftw3-dev libconfig-dev libasound2-dev libpulse-dev libgfortran-*-dev perl
RUN apt install -y python3-pip
RUN apt-get install -y gfortran texinfo

RUN /bin/bash -c 'source /opt/ros/humble/setup.bash &&
mkdir -p ~/ament_ws/src &&
cd ~/ament_ws/src &&
git clone -b ros2-migration https://github.com/introlab/audio_utils.git &&
pip install libconf &&
cd audio_utils &&
git submodule update --init --recursive &&
cd ~/ament_ws/src &&
git clone -b ros2-migration https://github.com/introlab/odas_ros.git &&
cd odas_ros &&
git submodule update --init --recursive &&
cd ~/ament_ws &&
source /opt/ros/humble/setup.bash &&
rosdep install --from-paths src -i -r -n -y &&
colcon build &&
source install/setup.bash'

ADD config/respeaker_usb_4_mic_array.cfg ~/ament_ws/src/odas_ros/config/configuration.cfg
"""

It seems to fall apart trying to build audio_utils...

CMake Error at /opt/ros/humble/share/ament_cmake_python/cmake/ament_python_install_package.cmake:122 (add_custom_target):
add_custom_target cannot create target
"ament_cmake_python_copy_audio_utils" because another target with the same
name already exists. The existing target is a custom target created in
source directory "/root/ament_ws/src/audio_utils". See documentation for
policy CMP0002 for more details.
Call Stack (most recent call first):
/opt/ros/humble/share/ament_cmake_python/cmake/ament_python_install_package.cmake:39 (_ament_cmake_python_install_package)
CMakeLists.txt:125 (ament_python_install_package)

CMake Error at /opt/ros/humble/share/ament_cmake_python/cmake/ament_python_install_package.cmake:141 (add_custom_target):
add_custom_target cannot create target
"ament_cmake_python_build_audio_utils_egg" because another target with the
same name already exists. The existing target is a custom target created
in source directory "/root/ament_ws/src/audio_utils". See documentation
for policy CMP0002 for more details.
Call Stack (most recent call first):
/opt/ros/humble/share/ament_cmake_python/cmake/ament_python_install_package.cmake:39 (_ament_cmake_python_install_package)
CMakeLists.txt:125 (ament_python_install_package)

Oh... and I am trying to build and run the docker container on a RaspberryPi 5

Thanks!

Kyle

Why the ROS GUI is updating super slowly?

When it is running properly, the ROS GUI provides dots continuously and there are always dots visible on the screen. However, when I start ROS, the display has a big delay in providing information: it repeats a loop of showing dots for 0.3 seconds and not showing anything for 0.3 seconds.

I have tried restarting the computer and replugging the microphone; it does not work.
Can anyone help pls?

Matching sound separation sources with sound tracked sources

Hello, I'm working on integrating odas_ros into our ros4hri pipeline.

As you can see, to each human, we would like to assign a <voice_id> and the audio source to improve the speech recognition. To my understanding, the current version of the sound source separation seems to output only an AudioFrame which contains up to 4 different sources but it does not match them to those that are tracked. Is that correct? If so, are you planning to develop it in the near future?

Thank you.

odas_ros and audio_utils output

I have set up odas_ros using <alsa_input.usb-SEEED_ReSpeaker_4_Mic_Array__UAC1.0_-00.multichannel-input> and the sound source localization from the topic /odas/ssl seems to be correct. Is there any way to configure odas_ros to output audio_utils/AudioFrame messages from one or more of the input channels? I would like to run sound classification alongside the sound source localization from odas_ros.

ROS audio play

Is there any documentation available for this package? Specifically, does this package stream audio from individual audio sources. Does it provide the azimuths of audio sources on any of the topics?

/odas/sss, /odas/sst, /odas/ssl doesn't publish on remote

I tried to set up odas_ros on a device so that I can publish to another device with ROS IP connection. Although I succeeded in getting data from /odas/sst_pose and /odas/ssl_pcl2 and I could visualize from rviz, the other three topics did not publish any data on the remote device. On the host device though, all five topics do publish data. Am I missing something?

Missing MusicBeatDetector

I'm trying to install this package on ros_noetic using catkin build, and am unable to compile it following the instructions.
I've also tried older checkouts but am unable to fix the issue.
I'm not sure if it's because of odas_ros or audio_utils.

Any help would be appreciated.

Project 'odas_ros' tried to find library 'MusicBeatDetector'.  The library
  is neither a target nor built/installed properly.  Did you compile project
  'audio_utils'? Did you find_package() it before the subdirectory containing
  its code is included?


Running odas_ros on remote computer

Is it possible somehow to send data to a remote computer (with Ubuntu OS) from a raspberry pi zero (with Raspbian Buster OS) which is connected to the Respeaker 4 mic array? I am running the raspberry pi zero over ssh connection in the remote computer. It is noteworthy that I have installed odas on pi and odas_web (and odas) on the remote computer and I could run odas_web with no issues. If I put the question differently, how can I use odas_ros in the same way odas_web operates with remote computer?

empty RVIZ and "Source hops: Cannot set sample format "

Hey, I tried to launch odas_ros with a ReSpeaker Mic Array v2.0 and used the configuration file that already worked with odas_web.
When I try to start odas_ros (roslaunch odas_ros odas.launch), I now get the following message:

....
process[odas/rviz-5]: started with pid [12432]
[ INFO] [1625570165.067320718]: Using configuration file = /home/onan/catkin_ws/src/odas_ros/config/configuration_ReSpeaker.cfg
[ INFO] [1625570165.067933051]: | + Initializing configurations...... 
[ INFO] [1625570165.123208075]: | + Initializing objects............. 
[ INFO] [1625570165.352487865]: | + Launch threads................... 
[ INFO] [1625570165.353189862]: | + ROS SPINNING................... 
Source hops: Cannot set sample format: Invalid argument

Then RVIZ starts up, but no input is displayed even though I make loud noises.
I think the missing input in RVIZ is caused by the "invalid argument" message.

I could not find any clues in the logs about which sample format is invalid. Do you have an idea why this happens? I hope to get useful advice from you so I can get it running shortly.

So here is my config file:

version = "2.1";
# Raw
raw: 
{
    fS = 16000;
    hopSize = 128;
    nBits = 32;
    nChannels = 4; 

   # fS = 16000;
   # hopSize = 512;
   # nBits = 32;
   # nChannels = 16; 

    # Input with raw signal from microphones
    interface: {                                  #"arecord -l" OR "aplay --list-devices" to see the devices
	 card = 1;
	 device = 0;
	 type = "soundcard";
	 devicename = "hw:CARD=1,DEV=0";
    }
}

# Mapping

mapping:
{

    map: (1, 2, 3, 4);

}

# General

general:
{
    
    epsilon = 1E-20;

    size: 
    {
        hopSize = 128;
        frameSize = 256;
    };
    
    samplerate:
    {
        mu = 16000;
        sigma2 = 0.01;
    };

    speedofsound:
    {
        mu = 343.0;
        sigma2 = 25.0;
    };

    mics = (
        
        # Microphone 1
        { 
            mu = ( -0.0405, +0.0000, +0.0000 ); 
            sigma2 = ( +0.000, +0.000, +0.000, +0.000, +0.000, +0.000, +0.000, +0.000, +0.000 );
            direction = ( +0.000, +0.000, +1.000 );
            angle = ( 80.0, 90.0 );
        },

        # Microphone 2
        { 
            mu = ( +0.0000, +0.0405, +0.0000 ); 
            sigma2 = ( +0.000, +0.000, +0.000, +0.000, +0.000, +0.000, +0.000, +0.000, +0.000 );
            direction = ( +0.000, +0.000, +1.000 );
            angle = ( 80.0, 90.0 );
        },

        # Microphone 3
        { 
            mu = ( +0.0405, +0.0000, +0.0000 ); 
            sigma2 = ( +0.000, +0.000, +0.000, +0.000, +0.000, +0.000, +0.000, +0.000, +0.000 );
            direction = ( +0.000, +0.000, +1.000 );
            angle = ( 80.0, 90.0 );
        },

        # Microphone 4
        { 
            mu = ( +0.0000, -0.0405, +0.0000 ); 
            sigma2 = ( +0.000, +0.000, +0.000, +0.000, +0.000, +0.000, +0.000, +0.000, +0.000 );
            direction = ( +0.000, +0.000, +1.000 );
            angle = ( 80.0, 90.0 );
        }

    );

    # Spatial filters to include only a range of direction if required
    # (may be useful to remove false detections from the floor, or
    # limit the space search to a restricted region)
    spatialfilters = (

        {

            direction = ( +0.000, +0.000, +1.000 );
            angle = (80.0, 90.0);

        }

    );  

    nThetas = 181;
    gainMin = 0.25;

};

# Stationnary noise estimation

sne:
{
    
    b = 3;
    alphaS = 0.1;
    L = 150;
    delta = 3.0;
    alphaD = 0.1;

}

# Sound Source Localization

ssl:
{

    nPots = 4;
    nMatches = 10;
    probMin = 0.5;
    nRefinedLevels = 1;
    interpRate = 4;

    # Number of scans: level is the resolution of the sphere
    # and delta is the size of the maximum sliding window
    # (delta = -1 means the size is automatically computed)
    scans = (
        { level = 2; delta = -1; },
        { level = 4; delta = -1; }
    );

    # Output to export potential sources
    potential: {

        format = "undefined";
        #format = "json";

        interface: {
             type = "blackhole";
            #type = "socket"; ip = "127.0.0.1"; port = 9002;
        };
    };

};

# Sound Source Tracking

sst:
{  

    # Mode is either "kalman" or "particle"

    mode = "kalman";

    # Add is either "static" or "dynamic"

    add = "dynamic";

    # Parameters used by both the Kalman and particle filter

    active = (
        { weight = 1.0; mu = 0.03; sigma2 = 0.0025 }
    );

    inactive = (
        { weight = 1.0; mu = 0.015; sigma2 = 0.0025 }
    );

    sigmaR2_prob = 0.0025;
    sigmaR2_active = 0.0225;
    sigmaR2_target = 0.0025;
    Pfalse = 0.1;
    Pnew = 0.1;
    Ptrack = 0.8;

    theta_new = 0.9;
    N_prob = 5;
    theta_prob = 0.8;
    N_inactive = ( 150, 200, 250, 250 );
    theta_inactive = 0.9;

    # Parameters used by the Kalman filter only

    kalman: {

        sigmaQ = 0.001;
        
    };
   
    # Parameters used by the particle filter only

    particle: {

        nParticles = 1000;
        st_alpha = 2.0;
        st_beta = 0.04;
        st_ratio = 0.5;
        ve_alpha = 0.05;
        ve_beta = 0.2;
        ve_ratio = 0.3;
        ac_alpha = 0.5;
        ac_beta = 0.2;
        ac_ratio = 0.2;
        Nmin = 0.7;

    };

    target: ();

    # Output to export tracked sources
    tracked: {

        format = "undefined";
        #format = "json";

        interface: {
            type = "blackhole";
            #type = "socket"; ip = "127.0.0.1"; port = 9000;
        };

    };

}

sss:
{
    
    # Mode is either "dds", "dgss" or "dmvdr"

    mode_sep = "dds";
    mode_pf = "ms";

    gain_sep = 1.0;
    gain_pf = 10.0;

    dds: {

    };

    dgss: {
        mu = 0.01;
        lambda = 0.5;
    };

    dmvdr: {

    };

    ms: {

        alphaPmin = 0.07;
        eta = 0.5;
        alphaZ = 0.8;        
        thetaWin = 0.3;
        alphaWin = 0.3;
        maxAbsenceProb = 0.9;
        Gmin = 0.01;
        winSizeLocal = 3;
        winSizeGlobal = 23;
        winSizeFrame = 256;

    };

    ss: {

        Gmin = 0.01;
        Gmid = 0.9;
        Gslope = 10.0;

    }

    separated: {

        fS = 44100;
        hopSize = 512;
        nBits = 16;        

        interface: {
            #type = "file";
            #path = "separated.raw";
	    type = "blackhole";
        }        
	#interface: {
	#	type = "socket";
	#	ip = "127.0.0.1";
	#	port = 9001;
	#}
    };

    postfiltered: {

        fS = 44100;
        hopSize = 512;
        nBits = 16;        

        #interface: {
        #    type = "file";
        #    path = "postfiltered.raw";
        #} 
	interface: {
		type = "blackhole";  }     

    };
}

classify:
{
    
    frameSize = 1024;
    winSize = 3;
    tauMin = 32;
    tauMax = 200;
    deltaTauMax = 7;
    alpha = 0.3;
    gamma = 0.05;
    phiMin = 0.15;
    r0 = 0.2;    

    category: {

        format = "undefined";

        interface: {
            type = "blackhole";
        }
    }
}

Is there a tutorial by which we can learn what different parameters are and how to control them?

I want to differentiate and track sound generated by humans and not any other sound.
Is that possible with this application?

Let's assume it's possible with the current setup; how can I do that?
If it's not possible, then I am thinking of training a model to predict whether the input sound is human or not, but then how should I feed this data to odas_ros so that it tracks the location of the sound source classified as human by the ML model?

Thanks
Gaurav
