testillano / h2agent

C++ HTTP/2 Mock Service which enables mocking HTTP/2 applications (also HTTP/1 supported).

License: Other

CMake 2.00% Dockerfile 0.16% C++ 80.95% Shell 5.93% Python 10.67% Smarty 0.16% Mustache 0.14%
benchmarks calculus component-testing cpp demos docker function-testing helm http1 http2 katas kubernetes load-testing mock prometheus-metrics proxy pytest rest-api restful-api tls-support

h2agent's People

Contributors

testillano

h2agent's Issues

Reserve initial memory for request body storage

Is your feature request related to a problem? Please describe.
Configurable pre-reserved memory allocation for the request body string container. This avoids possible reallocations done by std::string::append() when the string object is constructed from scratch without an initial memory reservation.

Describe the solution you'd like
This feature will be adapted from the http2comm library.

Matching schema nonsense regarding query parameters ignore

Is your feature request related to a problem? Please describe.
The current matching schema is the following:

{
  "$id": "http://localhost:8074/admin/v1/server-matching/schema",
  "$schema": "http://json-schema.org/draft-07/schema#",
  "additionalProperties": false,
  "properties": {
    "algorithm": {
      "enum": [
        "FullMatching",
        "FullMatchingRegexReplace",
        "PriorityMatchingRegex"
      ],
      "type": "string"
    },
    "fmt": {
      "type": "string"
    },
    "rgx": {
      "type": "string"
    },
    "uriPathQueryParametersFilter": {
      "enum": [
        "SortAmpersand",
        "SortSemicolon",
        "PassBy",
        "Ignore"
      ],
      "type": "string"
    }
  },
  "required": [
    "algorithm"
  ],
  "type": "object"
}

But the uriPathQueryParametersFilter value "Ignore" has no real utility: it just ignores query parameters, which are then not passed to the transformation engine, so if the provision tries to access them (request.uri.param.whatever), they are simply missing.
Also, the FullMatching algorithm is not valid when unpredictable query parameters are sent, so we should use FullMatchingRegexReplace or PriorityMatchingRegex.
The key point is that uriPathQueryParametersFilter should not have an "Ignore" option (it is not useful; we only need to sort or pass by), but the algorithm should.

Describe the solution you'd like
Remove Ignore from uriPathQueryParametersFilter, and add a new classification algorithm called FullMatchingIgnoreQueryParameters.
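A minimal sketch of the resulting matching configuration, assuming the algorithm name proposed above is finally adopted:

{
  "algorithm": "FullMatchingIgnoreQueryParameters"
}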

New source called "delete"

With this kind of source you could remove response body nodes:

  {
    "source": "delete",
    "target": "response.body.nodeToRemove"
  }

It could be named delete, deletion, destroy, etc.

Prepare new bunch of features to cover agent client functionality

h2agent is not limited to mocking/simulating servers.
Now that the server functionality is quite stable and complete, we will start to design client features.

This is an agent-oriented philosophy: what we want is to remotely control the agent and tell it to launch traffic towards a server endpoint.
So, we will have a provision similar to server-provision, but this time under admin/v1/client-provision.
It could be something like this:

{
  "id": "firstFlow",  <------ client-provision key
  "endpoint": "my-server",
  "requestMethod": "POST",
  "requestUri": "/a/simple/path",
  "requestDelayMs": 20,
  "retries": 0,
  "timeoutMs": 2000,
  "recvId": "firstFlow.answered"
}
 

POST /admin/v1/client-provision (body = JSON with the definition; overwriting is allowed)
DELETE /admin/v1/client-provision (deletes everything)
GET /admin/v1/client-provision
PUT or HEAD to trigger them (commented below)

Also, we will configure endpoints (/admin/v1/client-endpoint) with something like:

{
 "id":"my-server",
 "scheme": "http",
 "address": "localhost",
 "port": "8000"
}

Those client endpoints are remote server addresses and will have their own "sendseq", in the same way that the mock server "recvseq" exists.

Endpoints can be disabled ("access": "disabled"), so a provision pointing to one will do nothing when processed.
Another valid way to break testing is to remove the implicated endpoints, as no request will be sent if the endpoint is missing from that list:

POST /admin/v1/client-endpoint (body = JSON with the definition; existing ids are ignored, as no overwriting is allowed: a warning is shown, so they must be deleted first to be recreated)
DELETE /admin/v1/client-endpoint (deletes everything: closes the connection and deletes the in-memory object for all the endpoints)
GET /admin/v1/client-endpoint

To trigger client-provisions we could have HEAD (or maybe GET) requests.
With an unordered map (better), we have a mandatory "id" suffix in the path:
HEAD /admin/v1/client-provision/myTestClientFlow

sequence

Optional sequence seed: there is an optional sequence variable which can be passed to the executed provision.
This is of course used to launch traffic load over a range, from a test case design based on the sequence value.
It creates an internal variable "sequence", iterating over the values given through the admin interface:

GET /admin/v1/client-provision/myTestClientFlow?initialSequence=1 (finalSequence defaults to initialSequence, and initialSequence = 0 if missing)

GET /admin/v1/client-provision/myTestClientFlow?initialSequence=555000000&finalSequence=555999999&rate=1&repeat=true
This launches a thread which enqueues 1 item per second (rate=1) passing the sequence to the queue together with the provision reference.
Repeat is used to cycle again and again when exhausted (false by default).

Each flow could update the parameters at any moment (sequences can be restarted), for example to change the rate dynamically. Also, the sequence range can be updated for the ongoing thread.
To stop: GET /admin/v1/client-provision/myTestClientFlow?rate=0

Those seed values could also be used to access a global variable map in this way:

  1. Create global variable json document:
    {
    "0": "555000000",
    "1": "555000001",
    ...
    }
    Load these global variables into the agent.

  2. Access global variable by name:

    "source": "globalVar.@{sequence}",
    "target": "var.subscriber"

You could use external files to load specific sequences based on the provided seed, with any arbitrary value assigned to it.
Anyway, simple examples like the former one don't really need global variables, as you could build the value by appending:

"source": "@{sequence}",
"filter": { "Sum": 555000000 },
"target": "var.subscriber"

You can also ignore the sequence and just use it to "repeat" a specific flow a number of times.
There is no default rate, so if it is not provided, a single flow is executed (with the initial sequence as the sequence value, perhaps 0), regardless of whether repeat is set or not.

The client-provision structure would be similar to the server-provision schema, where "transform" is applied to prepare the request before sending, normally used to adapt the request URI dynamically (except when it is fixed):

"transform": [
  { "source": "@{sequence}", "filter": { "Sum": 555000000 }, "target": "var.subscriber" },
  { "source": "value.20", "target": "requestDelayMs" },
  { "source": "request.uri", "target": "var.requestUri" },
  { "source": "value.@{requestUri}/@{subscriber}", "target": "request.uri" },
  { "source": "value.1", "target": "scheduleId.secondFlow" },
  { "source": "sendseq", "target": "request.body.json.string./clientsequence" },

  ...
]

requestBody can serve as a template in the same way that responseBody does in server-provision.
Other ideas will come, but in general we will have a different set of needs than the server-provision ones.

States

inState/outState evolve within a specific client provision to build a complete test case (script).
So, when the response arrives, we will change the state to trigger the next provision/state.

The sequence is triggered by a parent "sender", so we could even use a trick to build complex tests which launch parallel
requests by means of a "modulo" operation. Imagine we want to send A, B and C without waiting for their responses, and then those evolve to A2, B2, C2 and A3, B3, C3. So, we could have a testSeq like:
"source": "math.@{sequence}%3",
"target": "var.testSeq"

Then we could build the phone number or id based on that testSeq. Then the A outState would point to A2 and then A3; same for B and C. The raw sequence would be used to perform A, B or C: 0, 3, 6, 9 ... for A (multiples of 3), 1, 4, 7, 10 ... for B (multiples of 3 plus 1), and for C, multiples of 3 plus 2.
The next group of three is managed as another test script.
So, if you need a test script rate of 200 qps, you would set 200*3 = 600 to get that virtual rate, since each test script consumes 3 sequences.
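A sketch of the transformation items for this trick, reusing only constructions already shown above (illustrative, not a final design):

"transform": [
  { "source": "math.@{sequence}%3", "target": "var.testSeq" },
  { "source": "@{sequence}", "filter": { "Sum": 555000000 }, "target": "var.subscriber" }
]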

High load crash on stateful purged provision

Describe the bug
Given a multiple-state provision (reproduced with 5 states until purge) returning 204 without a response body, the process crashes after high load.

To Reproduce
Reproduced with the priority matching regex and full matching regex replace algorithms. It takes some seconds of load to get the crash.
It seems the states are responsible; with a single state it does not happen.

Expected behavior
No crash. There is a possible workaround: use a fallback provision or avoid states.

Traffic validation with schemas

Is your feature request related to a problem? Please describe.
It is a problem because currently you provide a general schema, and when your mock is simulating various URIs, those URIs could have different schemas. Joining them with oneOf or similar could produce incorrect validations (swapping the schema matched for the other URI).

Describe the solution you'd like
We should implement a schema map of the form:
MAP<URI prefix, schema document>
So, when receiving a request, the schema for the corresponding URI is retrieved.
Also, response schemas should be implemented as part of the provision content, because dynamic transformations could build an invalid response body. But this is less important, so only request schemas will be implemented for this issue.
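A minimal sketch of how such a schema map could be configured (the field names requestUri and schema are only assumptions for illustration, not a final interface):

[
  {
    "requestUri": "/app/v1/foo/bar",
    "schema": {
      "$schema": "http://json-schema.org/draft-07/schema#",
      "type": "object",
      "required": [ "id" ]
    }
  },
  {
    "requestUri": "/app/v1/other",
    "schema": { "type": "object" }
  }
]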

Improve transformation item traces

Is your feature request related to a problem? Please describe.
The transformation item is traced on load, but it should also be traced during its processing.
Also, the information shown should be summarized instead of being so detailed.

Specify target URI in foreign method transformation

As said in /kata/10.Foreign_States/solution.txt (LIMITATIONS), there is a gap for foreign states when the foreign method manages a different kind of URI.
Probably we could do something like:

  {
    "source": "value.get-obtains-not-found",
    "target": "outState.GET./virtual/get/path"
  }

Deprecate PriorityMatchingRegex

Is your feature request related to a problem? Please describe.
The PriorityMatchingRegex algorithm should be renamed to RegexMatching,
which is more intuitive and is named consistently with FullMatching
and FullMatchingRegexReplace.

Describe the solution you'd like
Keep backward compatibility, but show a warning trace when the old name is used.
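For illustration, the matching configuration would then simply reference the new name (a minimal sketch, assuming the rename is adopted):

{
  "algorithm": "RegexMatching"
}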

Kata exercise 09 reassigned

Is your feature request related to a problem? Please describe.
The current kata exercise 09 (dynamic states) is very difficult. We will replace it with a nice arithmetic server.

Replace @{variable} in paths

Replace variables not only in the "value.xx" form, but also in response object paths.
For example you could do the following:

    "transform": [
      {
        "source": "general.random.1.3",
        "target": "var.random"
      },
      {
        "source": "request.body.list/@{random}",
        "target": "response.body.string.listElement-@{random}"
      }
    ]

New admin operation to know the logging level

Is your feature request related to a problem? Please describe.
We have a PUT operation to set the new logging level of the process.
It would be nice to have a GET operation to get the current level.

Close-road outState to purge contexts

It would be good to have a special reserved out-state to indicate that the scenario has ended and you want to purge the server data related to it.
In this way, long-term load tests will avoid growing memory consumption.
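A minimal sketch of how the last provision of a scenario could use such a reserved out-state (the name "purge" is only used here for illustration; it is not decided in this issue):

{
  "requestMethod": "DELETE",
  "requestUri": "/app/v1/subscriber",
  "responseCode": 204,
  "outState": "purge"
}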

Degradation with high load and response delays

Describe the bug
There is probably server degradation when response delays are applied.

To Reproduce
$> cd st/hermes
$> H2AGENT__RESPONSE_DELAY_MS=20 HERMES__RPS=3000 HERMES__DURATION=7200 ./start.sh

Expected behavior
With high load (around 3000 rps, it depends on the machine), hermes timeouts (last column) could appear after some time.

Suggested
Add prometheus counters to better follow this behavior.

Bad requestNumber read on internal event source

When passing a negative request number to an event source, the sign is ignored when retrieving the data.

To Reproduce
kata/08.Server_Data_History is failing due to this (branch kata-solutions)

Hyper import failing since alpine 3.16 (latest on May 22) because of python 3.10 packaged

Describe the bug
The recent update of the alpine base image (3.15 -> 3.16) has broken the hyper integration within the CT docker image.
It seems python has been upgraded to version 3.10 and hyper must be imported in a different way.

To Reproduce
As "build.sh --auto" uses latest images by default, you will have component test error on pytest startup if you create this
image in these days. Executing ./build.sh --ct-image, will prompt for base images and then you could provide 3.15 which is working perfectly.

Expected behavior
We would like to keep working on the latest version (3.16), so we must adapt conftest.py to change the way the hyper library is imported.

Manage static response bodies as cached json dumps

Is your feature request related to a problem? Please describe.
When a selected provision contains a json object for the response body, and this response is not modified within that provision by any transformation item, we should respond with a cached string which is a dump of that json object, as it never changes.
Currently we are calling nlohmann::json dump() on every response, which is overkill.

Describe the solution you'd like
Just store the cached dump representation as a string for the static json response body when the provision is loaded into the process.
So, if no transformations affect it (this logic is ALREADY PRESENT!), we will respond with the cached data.

Boost asio coredump with highload when breaking client connection

Describe the bug
SIGSEGV fault.
To dump the core, raise the core size limit and set the desired core pattern path, for example:
$> ulimit -c unlimited
$> echo "/tmp/cores/core.%e-%p-%t-%h" | sudo tee -a /proc/sys/kernel/core_pattern

To Reproduce
The h2agent must be executed beforehand, so build it and launch it with warning traces:
$> build_type=Debug ./build.sh --auto
$> build/Debug/bin/h2agent --verbose

Launch a load above the EC (engineering capacity) and wait for Hermes to finish and close the connection.
For example, on an 8-vcpu machine:
$> cd st/hermes
$> H2AGENT__RESPONSE_DELAY_MS=0 HERMES__RPS=25000 HERMES__DURATION=10 ./start.sh

Expected behavior
When closing, a segmentation fault appears.
Probably linked to timeouts.

warning: core file may not match specified executable file.
[New LWP 7478]
[New LWP 7474]
[New LWP 7477]
[New LWP 7471]
[New LWP 7475]
[New LWP 7473]
[New LWP 7472]
[New LWP 7479]
[New LWP 7476]
Core was generated by `./h2agent.debug --verbose'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00000000004c94db in nghttp2_session_recv (session=<optimized out>) at nghttp2_session.c:3283
3283    nghttp2_session.c: No such file or directory.
[Current thread is 1 (LWP 7478)]
(gdb) bt
#0  0x00000000004c94db in nghttp2_session_recv (session=<optimized out>) at nghttp2_session.c:3283
#1  0x0000000000000001 in ?? ()
#2  0x00007f59f7e3f760 in ?? ()
#3  0x00007f59f7e3f760 in ?? ()
#4  0x00007f59f7d493c0 in ?? ()
#5  0x00000000004cf3a5 in nghttp2_http_on_header (session=
    0x718aeb <std::num_get<wchar_t, std::istreambuf_iterator<wchar_t, std::char_traits<wchar_t> > >::_M_extract_int<unsigned long long>(std::istreambuf_iterator<wchar_t, std::char_traits<wchar_t> >, std::istreambuf_iterator<wchar_t, std::char_traits<wchar_t> >, std::ios_base&, std::_Ios_Iostate&, unsigned long long&) const+1169>, 
    stream=0x717098 <std::num_get<wchar_t, std::istreambuf_iterator<wchar_t, std::char_traits<wchar_t> > >::do_get(std::istreambuf_iterator<wchar_t, std::char_traits<wchar_t> >, std::istreambuf_iterator<wchar_t, std::char_traits<wchar_t> >, std::ios_base&, std::_Ios_Iostate&, bool&) const+554>, frame=0x7f59f7d489f0, nv=0x7f59f7d48640, trailer=-136766664) at nghttp2_http.c:246
#6  0x0000000000000000 in ?? ()

Source helpers automatically at h2agent image

Is your feature request related to a problem? Please describe.
It is somewhat uncomfortable to have to source the helpers every time we go into an h2agent image.
This should be done automatically.

Describe the solution you'd like
Install bash within the process image, and add the helpers to the /etc/profile.d directory, which is sourced on shell execution.

Manage not only json but arrays and string content

The agent is highly oriented to managing json transport, but it is not ready for string content or arrays.

The solution implies extending the provision schema:

"responseBody": {
     "oneOf": [  
        {"type": "object"}, 
        {"type": "string"}, 
        {"type": "array"},
     ]
}

And manage the proper content through source, transformations and targets.
Documentation must warn about limitations depending on the source received and target desired.
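For illustration, once the schema accepts the three types, a provision with a plain string body could be sketched like this (the URI and body values are made up):

{
  "requestMethod": "GET",
  "requestUri": "/plain/text/example",
  "responseCode": 200,
  "responseBody": "just a plain string"
}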

Request body string (not json) is not propagated to response body as string

Describe the bug
Considering this provision:

{
  "requestMethod": "POST",
  "requestUri": "/the/uri",
  "responseCode": 200,
  "transform": [
    {
      "source": "request.body",
      "target": "response.body.string"
    }
  ]
}

Then, requesting:
curl -i --http2-prior-knowledge http://127.0.0.1:8000/the/uri -d 'hello'

we obtain "null" instead of "hello":

HTTP/2 200 
date: Sun, 05 Jun 2022 18:34:59 GMT

null

Expected behavior
We should obtain the request body string.

Additional context
The problem is due to trying to interpret the body as json.
The content-type should be checked to determine the source type.

Refactor docker tag management

Is your feature request related to a problem? Please describe.
The following files use "latest" as the default tag, but this breaks CI when an older version needs to be built.
The current workaround is to get the http2comm version from build-native.sh, but this is not documented:

.github/workflows/ci.yml
Dockerfile
Dockerfile.build
Dockerfile.coverage
Dockerfile.training
README.md
build.sh
ct/Dockerfile
ct/src/conftest.py
ct/test.sh
helm/ct-h2agent/values.yaml
helm/h2agent/values.yaml
tools/coverage.sh
tools/training.sh

Describe the solution you'd like
For example, base_tag defaults to latest in build.sh, regardless of whether it refers to h2agent, h2agent_builder, the http2comm library, etc.
We have to distinguish them and create a versions.src file with all the versions. The workflow must also be reviewed, together with all the other files listed above.

Measure events timestamp in microseconds

Is your feature request related to a problem? Please describe.
Json reports for data events show the reception timestamp in milliseconds.
As the http2 library uses microseconds to store latencies, we should do the same.
Indeed, milliseconds seem to be too coarse a resolution for high load events.

Describe the solution you'd like
Get the reception microseconds from the http2comm library and use them when storing the event, instead of gathering the timestamp again.
Also, the time is badly measured now because it is taken when the event is stored, after the transformations are applied.

Refactor MockServer event classes to unify loader interface

Is your feature request related to a problem? Please describe.
The MockServerKeyEvent, MockServerKeyEvents and MockServerEventsData classes have a similar interface for load/loadRequest.

Describe the solution you'd like
Those interface parameters should be joined in a common structure or similar.

Avoid _exit override

Describe the bug
The function _exit() is unsafe to declare in main.cpp.
It could collide with _Exit on some architectures, causing a dangerous infinite loop on termination.

To Reproduce
Reproduced on a native build: start the process and interrupt it (CTRL-C). The SIGINT handler goes to _exit(), entering an infinite loop and then a crash dump.

Expected behavior
Avoid the collision.
The function will be renamed to 'myExit()'.

Abstract mock server request names

We will rename the "mock server request" concept to "event".
Also, the items (event and list of events, a.k.a. history) are renamed to include "Key" in their names, to make clear that this is the key element within the global server data map:

MockServerRequest -> MockServerKeyEvent
MockServerRequests -> MockServerKeyEvents
MockServerRequestsData -> MockServerEventsData

So, we will have the single item, which is an event for a received method and URI, a.k.a. "MockServerKeyEvent"; then we have a history of events for the same key (method/URI), which is the MockServerKeyEvents class. Finally, we can have events for different keys: the MockServerEventsData class, which holds the global server data storage information.

Also, we will rename the key typedef and a private member name:

mock_server_requests_key_t -> mock_server_events_key_t
mock_server_request_data_ -> mock_server_events_data_

Add hints about kubectl port-forwarding towards CT deployment

Is your feature request related to a problem? Please describe.
In order to ease testing against the CT deployment, we should show the kubectl proxying procedure.

Describe the solution you'd like
At the end of ct/test.sh, if everything is OK, we should show a hint message with the kubectl port-forward commands needed to access the CT services.

Describe alternatives you've considered
We could add a NOTES.txt to the helm chart, but this would be hidden after CT execution. Indeed, this is not actually required, as the project can be executed natively. The only thing that could be of interest is to do ST testing with st/start.sh, because we could see metrics through the grafana deployment (tools/play_grafana.sh).

Native build script

Is your feature request related to a problem? Please describe.
Create a script to build the project natively.

Describe the solution you'd like
A build-native.sh script with hardcoded, fixed versions.
Centralize it with the versions used in the build.sh script as much as possible.

Additional context
This is useful to prepare sonar CI stage.

Merge response body template with response body object target transformations

Is your feature request related to a problem? Please describe.
It is more of a limitation.
When our provision has a response body object template (a json document), it is overwritten by further transformations where another object is targeted at response.body.object.

Describe the solution you'd like
I would like to always merge both objects by default because, at least, you can always remove the nodes you don't want (with the eraser) at any moment.
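For illustration, a provision like the following (values are made up) currently loses the template node "fixed" when a transformation writes another object to response.body.object; the request here is to merge both instead:

{
  "requestMethod": "GET",
  "requestUri": "/merge/example",
  "responseCode": 200,
  "responseBody": { "fixed": "template-value" },
  "transform": [
    { "source": "request.body", "target": "response.body.object" }
  ]
}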

Improve valgrind helper script

Is your feature request related to a problem? Please describe.
Improve valgrind script

Describe the solution you'd like
Provide hints not only for memcheck but also for other valgrind tools (callgrind, helgrind, massif).

Implement Arash Partow math library interpreter as a new source of data

Is your feature request related to a problem? Please describe.
h2agent has poor math coverage: just Sum and Multiply, which also give subtraction and division. However,
we would like to parse advanced algebraic expressions to calculate complex formulas and operations.

Describe the solution you'd like
The Arash Partow GitHub project "exprtk" fits the need properly.
Some benchmarking done so far is promising, and it is a very easy library to integrate.
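A hypothetical transformation item, just to illustrate the kind of expression exprtk could evaluate (the "math." source prefix is an assumption borrowed from the client ideas above, not a confirmed interface):

{
  "source": "math.(@{sequence}+1)*2",
  "target": "var.computed"
}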

Improve astyle procedure

Is your feature request related to a problem? Please describe.
The current astyle format does not handle some macro definitions well.
For example, the LOGXXX ones are not correctly formatted/indented.

Describe the solution you'd like
Find an alternative astyle method; not only a different configuration, but possibly even a different image or tool.

Minor improvement: extractQueryParameters

Is your feature request related to a problem? Please describe.
extractQueryParameters() should be called only when needed: when query parameters are received.
This avoids creating an empty qmap for every reception.

Multipart content-type support

Is your feature request related to a problem? Please describe.
Currently, json and readable strings are decoded.
When another mime type arrives, the internal data shows the nlohmann parse error as the responseBody content.

Describe the solution you'd like
Normally, binary data is not going to be required by transformations, because it is very complex to manage a way to do this.
The same goes for multipart and other types. So, it would be interesting to have a class representation which translates the received data into a json representation. For example, if we receive binary, we could put:

"requestBody": {
  "binary": ""
}

The same applies to UTF-8 readable formats and similar.
In the case of multipart, we could add fields to the requestBody object, named after the content type itself. For example,
if we have json and vnd.whatever, we could represent:

"requestBody": {
  "application/json": { ..... object ... },
  "vnd.whatever": "af043928fe8d79cca930"
}

Better:
A normal content-type will be represented as usual with requestBody and requestHeaders, where the former is the body content itself. With multipart, we will decode into requestBody, and requestHeaders will carry multipart/xxx with the boundary.
requestBody will again have content and headers for each part:

"requestBody": {
  "part.1": {
    "content": { .... },
    "headers": { .... }
  },
  "part.2": {
    "content": { .... },
    "headers": { .... }
  },
  ...
}

If the inner "Content-Type" header is application/json, then "content" will be json.
For other types, we will represent readable / non-readable content; the latter as a hex string.

Additional context
Consider using this github library:
https://github.com/iafonov/multipart-parser-c
or
https://github.com/FooBarWidget/multipart-parser

Add informational trace to monitor worker threads

Is your feature request related to a problem? Please describe.
Improvement to know the number of used worker threads.

Describe the solution you'd like
An informational trace every 1000 requests is enough to avoid flooding the terminal while still
monitoring the variation of worker threads.

Describe alternatives you've considered
An administrative operation to get the information was considered, but monitoring would be more difficult and
the REST API would become dirty.

Additional context
This is part of nghttp2 bottleneck investigation.

Improve server matching configuration

Is your feature request related to a problem? Please describe.
The query parameters separator (normally '&', but in rare/old systems ';' could be used) is coupled to the query parameters filter.
So you have SortAmpersand, SortSemicolon, PassBy and Ignore, but you don't have PassBy and Ignore for the semicolon separator (stated as an assumed limitation, since the semicolon separator is rare).

Describe the solution you'd like
It would be good to decouple the two concepts, separator and filter, to allow any combination.
The matching configuration schema should be modified to have:
uriPathQueryParameters { "separator", "filter" }

This change is not backward compatible, so we should move to 2.5.0.
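A possible shape for the decoupled configuration (the enumeration values "Ampersand" and "Sort" are illustrative assumptions, not final names):

{
  "algorithm": "FullMatching",
  "uriPathQueryParameters": {
    "separator": "Ampersand",
    "filter": "Sort"
  }
}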

Performance optimizations

Is your feature request related to a problem? Please describe.
Adapt to http2comm v1.2.0 version.
testillano/http2comm#2

Describe the solution you'd like
Adapt the h2agent interfaces to the changes done in the http2comm v1.2.0 library.
We will extract str() on demand (or when data storage is selected).

Improve ct script

Is your feature request related to a problem? Please describe.
Improve the ease of use regarding prepend variables.

Describe the solution you'd like
Simplify the script by removing things like the upgrade step, which is never used and takes too much time.
Make it more intuitive with an action parameter (deploy, test, etc.).
