
iopipe-java's Introduction

IOpipe Agent for JavaScript



IOpipe is a serverless DevOps platform for organizations building event-driven architectures in AWS Lambda. IOpipe captures crucial, high-fidelity metrics for each Lambda function invocation. This data powers a flexible and robust development and operations experience with features including tracing, profiling, custom metrics, and low-latency alerts. Get started today to quickly and confidently gain superior observability, identify issues, and discover anomalies in your connected applications.

Note: this library is a lower-level implementation than the package you are most likely looking for. Enjoy pre-bundled plugins like tracing and event info with @iopipe/iopipe

Installation

Install using your package manager of choice,

npm install @iopipe/core

or

yarn add @iopipe/core

If you are using the Serverless Framework to deploy your lambdas, check out our serverless plugin.

Usage

Configure the library with your project token (register for access), and it will automatically monitor and collect metrics from your applications running on AWS Lambda.

Example:

const iopipeLib = require('@iopipe/core');

const iopipe = iopipeLib({ token: 'PROJECT_TOKEN' });

exports.handler = iopipe((event, context) => {
  context.succeed('This is my serverless function!');
});

Custom metrics

You may add custom metrics to an invocation using context.iopipe.metric with either string or numerical values. Keys have a maximum length of 256 characters, and string values are limited to 1024 characters.

Example:

const iopipeLib = require('@iopipe/core');

const iopipe = iopipeLib({ token: 'PROJECT_TOKEN' });

exports.handler = iopipe((event, context) => {
  context.iopipe.metric('key', 'some-value');
  context.iopipe.metric('another-key', 42);
  context.succeed('This is my serverless function!');
});
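The documented limits can also be checked client-side before calling context.iopipe.metric. The following is an illustrative sketch only; validMetric is a hypothetical helper, not part of the agent's API, and the agent's actual truncation behavior may differ:

```javascript
// Hypothetical helper (not part of @iopipe/core): check a metric
// against the documented limits before sending it.
function validMetric(key, value) {
  if (typeof key !== 'string' || key.length > 256) return false; // key limit
  if (typeof value === 'string') return value.length <= 1024;    // string value limit
  return typeof value === 'number' && Number.isFinite(value);    // numeric values
}
```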

Labels

You can label an invocation using context.iopipe.label with a string value, limited to 128 characters.

Example:

const iopipeLib = require('@iopipe/core');

const iopipe = iopipeLib({ token: 'PROJECT_TOKEN' });

exports.handler = iopipe((event, context) => {
  context.iopipe.label('something-important-happened');
  context.succeed('This is my serverless function!');
});

Configuration

Methods

You can configure your iopipe setup through one or more methods, which can be mixed to form a config chain. The current methods are listed below in order of precedence. The module instantiation object overrides all other config values (if values are provided).

  1. Module instantiation object
  2. IOPIPE_* environment variables
  3. An .iopiperc file
  4. An iopipe package.json entry
  5. An extends key referencing a config package
  6. Default values
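The chain can be thought of as a merge where earlier sources win. The following is a simplified sketch of that resolution; resolveConfig and the inline source values are illustrative, not the library's internals:

```javascript
// Illustrative sketch of config-chain resolution (not library source):
// sources are ordered highest-precedence first, and higher-precedence
// values overwrite lower ones.
const defaults = { debug: false, networkTimeout: 5000, timeoutWindow: 150 };

function resolveConfig(sources) {
  // Walk from lowest precedence (rightmost) to highest, so later
  // assignments from higher-precedence sources win.
  return sources.reduceRight((acc, src) => Object.assign(acc, src || {}), {});
}

const resolved = resolveConfig([
  { debug: true },           // 1. module instantiation object
  { networkTimeout: 30000 }, // 2. IOPIPE_* environment variables
  null,                      // 3. no .iopiperc file
  null,                      // 4. no package.json entry
  null,                      // 5. no extends package
  defaults                   // 6. default values
]);
```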

Options

token (string: required)

If not supplied, the environment variable $IOPIPE_TOKEN will be used if present. Find your project token.

debug (bool: optional = false)

Debug mode will log all data sent to IOpipe servers to STDOUT. This is also a good way to evaluate the sort of data that IOpipe is receiving from your application. If not supplied, the environment variable $IOPIPE_DEBUG will be used if present.

const iopipe = require('@iopipe/core')({
  token: 'PROJECT_TOKEN',
  debug: true
});

exports.handler = iopipe((event, context, callback) => {
  // Do things here. We'll log info to STDOUT.
});

networkTimeout (int: optional = 5000)

The number of milliseconds IOpipe will wait while sending a report before timing out. If not supplied, the environment variable $IOPIPE_NETWORK_TIMEOUT will be used if present.

const iopipe = require('@iopipe/core')({ token: 'PROJECT_TOKEN', networkTimeout: 30000 });

timeoutWindow (int: optional = 150)

By default, IOpipe will capture timeouts by exiting your function 150ms early from the AWS configured timeout, to allow time for reporting. You can disable this feature by setting timeoutWindow to 0 in your configuration. If not supplied, the environment variable $IOPIPE_TIMEOUT_WINDOW will be used if present.

const iopipe = require('@iopipe/core')({ token: 'PROJECT_TOKEN', timeoutWindow: 0 });

plugins (array: optional)

Note that if you use the @iopipe/iopipe package, you get our recommended plugin setup right away. Plugins can extend the functionality of IOpipe in ways that best work for you. Follow each plugin's guide for proper installation and usage with the @iopipe/core library.

Example:

const tracePlugin = require('@iopipe/trace');

const iopipe = require('@iopipe/core')({
  token: 'PROJECT_TOKEN',
  plugins: [tracePlugin()]
});

exports.handler = iopipe((event, context, callback) => {
  // Run your fn here
});

enabled (bool: optional = true)

Conditionally enable/disable the agent. The environment variable $IOPIPE_ENABLED will also be checked.
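A disabled agent can behave as a pass-through wrapper. This sketch illustrates the idea only; makeWrapper is a hypothetical stand-in, not the library's source, and the IOPIPE_ENABLED fallback semantics shown are an assumption:

```javascript
// Illustrative sketch only (not the library's source): when the agent
// is disabled, the wrapper can return the handler unchanged so there
// is no instrumentation overhead.
function makeWrapper(config) {
  const enabled =
    config.enabled !== undefined
      ? config.enabled
      : process.env.IOPIPE_ENABLED !== 'false'; // env fallback (assumed semantics)
  return function wrap(handler) {
    if (!enabled) return handler; // pass-through when disabled
    return function instrumented(event, context, callback) {
      // ...reporting would be started/stopped around this call...
      return handler(event, context, callback);
    };
  };
}

const passthrough = makeWrapper({ enabled: false });
const identityCheck = (event, context, cb) => cb(null, 'ok');
```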

url (string: optional)

Sets an alternative URL to use for the IOpipe collector. The environment variable $IOPIPE_COLLECTOR_URL will be used if present.

RC File Configuration

Not recommended for webpack/bundlers due to dynamic require.

You can configure iopipe via an .iopiperc RC file. An example of that is here. Config options are the same as the module instantiation object, except for plugins. Plugins should be an array containing mixed-type values. A plugin value can be a:

  • String that is the name of the plugin
  • Or an array with plugin name first, and plugin options second
{
  "token": "wow_token",
  "plugins": [
    "@iopipe/trace",
    ["@iopipe/profiler", {"enabled": true}]
  ]
}

IMPORTANT: You must install the plugins as dependencies for them to load properly in your environment.

package.json Configuration

Not recommended for webpack/bundlers due to dynamic require.

You can configure iopipe within an iopipe package.json entry. An example of that is here. Config options are the same as the module instantiation object, except for plugins. Plugins should be an array containing mixed-type values. A plugin value can be a:

  • String that is the name of the plugin
  • Or an array with plugin name first, and plugin options second
{
  "name": "my-great-package",
  "dependencies": {
    "@iopipe/trace": "^0.2.0",
    "@iopipe/profiler": "^0.1.0"
  },
  "iopipe": {
    "token": "wow_token",
    "plugins": [
      "@iopipe/trace",
      ["@iopipe/profiler", {"enabled": true}]
    ]
  }
}

IMPORTANT: You must install the plugins as dependencies for them to load properly in your environment.

Extends Configuration

Not recommended for webpack/bundlers due to dynamic require.

You can configure iopipe within a package.json or rc file by referencing an extends config package. An example of that is here. Config options are the same as the module instantiation object, except for plugins. Plugins should be an array containing mixed-type values. A plugin value can be a:

  • String that is the name of the plugin
  • Or an array with plugin name first, and plugin options second

For an example of a config package, check out @iopipe/config.

IMPORTANT: You must install the config package and plugins as dependencies for them to load properly in your environment.

License

Apache 2.0

iopipe-java's People

Contributors

adjohn, katiebayes, kolanos, nodebotanist, pselle, sullis, xerthesquirrel


iopipe-java's Issues

Implement Event Decoder for Testing

Something that would work better for the tests, especially since there will probably be more of them, is a decoder which reads in the input event and provides an OOP layout over it. This will greatly simplify test writing and make the events easier to work with.

Using IOpipe with Gradle

Gradle is the other build system used in Java, and it does have interoperability with Maven, so I think it would be useful to have an example and instructions on how to actually use it to build lambdas.

All invocations are reported as occurring at the same time.

All invocations which are executed are reported to have occurred at the same time.

This does not otherwise affect the reported data, except that the invocation times are all equal to the start time; however, when downloading profiler data in the staging interface, it only lets you download the first invocation.

This is easily fixed.

Project URLs Need Update

Since the repository was renamed to iopipe-java a number of the files still refer to iopipe-java-core.

Duplicate log4j2 in `mvn test`

When running mvn test, log4j2 messages are duplicated and printed twice for every message that is output.

Assigning myself since this should be a simple fix.

Add Generic Handler Entry Point?

There could be an alternative means of invoking methods by telling AWS to enter via a generic handler, such as com.iopipe.GenericRequestHandler. This handler would have the handleRequest() method and would additionally use an extra environment variable, such as IOPIPE_GENERIC_REQUEST_HANDLER, which points to another class and method using the same format as AWS's entry point. It would basically look up the class and perform the transformation as needed, as if it were called by AWS; the call would be wrapped by the library and then invoked. This could be useful where there are a large number of request handlers and it might not be feasible, or could be really annoying, to set up wrappers for all of them (less boilerplate in user code).

Use a single background thread for CPU profile sampling.

Currently, a thread is spawned for each execution to sample the CPU usage of the running threads. However, when many invocations happen at once with profiling enabled, all of them sampling threads at the same time produces so much contention that the code runs dramatically slower. Since AWS appears to allow only two CPUs to execute at once, profiling effectively becomes a bottleneck and grows more inaccurate and heavyweight the more invocations are being profiled. My proposal is to instead run a single service thread which performs all the needed profiling; any execution that wants profiling data just registers the threads it wants profiled as it builds a report. This should definitely improve things, make it easier to use, and allow profiling when many methods are being executed at one time.

Additionally, when writing the profiler I created a ThreadGroup and spawned a new thread so that the profiler could profile any new threads that were created. But spawning and running a new thread adds some overhead and it will be simpler and faster to just monitor the main thread.

So then, to have a thread be profiled, there would be a call such as monitorThread(Thread) which tells plugins that support thread monitoring to monitor the given thread. This requires user code changes to specify that a thread should be monitored, but it means users explicitly decide whether a thread is monitored.

Add static means to obtain `IOpipeExecution` at any time from any running child thread.

Using a thread-local variable, a method could be added to IOpipeService to obtain the IOpipeExecution from any thread, such as when the reference has been lost.

Additionally, if InheritableThreadLocal<IOpipeExecution> is used, then newly created threads will inherit the reference to IOpipeExecution without needing it to be passed.

On top of that, it can be used to handle recursive calls of IOpipeService and simply no-op when run recursively. This could be used to prevent other handlers from generating measurements while the current call is being measured.

Find new home for JavaDocs

JavaDocs no longer live in the repository which means the README has to be updated and the documentation should be placed somewhere.

We might be able to use a service like https://www.javadoc.io/ to host our JavaDoc.

Use single watchdog monitoring thread.

A single watchdog thread could be used instead of spawning a watchdog thread for each invocation. This would potentially increase performance, as fewer threads would need to run at once, even though most of their time is spent waiting for a timeout to occur.

Do not create a new Thread in a new ThreadGroup when performing an execution.

To handle the profiler profiling only a given execution, I had to create a new ThreadGroup and spawn a new thread for every invocation. This creates some overhead, so I am very much of the opinion that this should be reversed in favor of a thread monitor or attachment of sorts. This should decrease latency and allow executions to perform faster. As a result, the profiler will not have a good idea of which threads belong to a given execution if they are unflagged, but this might not be too much of an issue anyway.

This is related to #77.

CircleCI 1.0 Going Away

CircleCI has sent out an email specifying that 1.0 builds will be removed in the future; I believe we should determine the best way to migrate to 2.0 when we can.

README Shields

GitHub puts the shields in the same part as the header and it might look cleaner to have them on the next line instead. Also in the README itself this would remove all the shield stuff being in the header line.

Also, CircleCI has a shield that can show the passing status which might be something to add to quickly see if everything is fine. They have information here https://circleci.com/docs/1.0/status-badges/ and they say one should create a new API token which is quite restrictive.

Add .label option

Allow users to "label" invocations (send a custom metric with only a name). This data goes into the labels array.

Use a lighter logging framework? (Should reduce coldstart times)

Log4j2 is massive: about 1,000 classes in the core, and the API itself has 188 classes. Since Log4j2 is initialized at startup, the JVM has to load all of the classes it needs, and they also need to be included in the shaded JAR. Reducing the number of classes to load should, as an effect, reduce cold start times.

This though would be internal for the Java agent, users are free to use whatever logging framework they want.

I looked around and I saw https://tinylog.org/ which has about a few dozen classes. Although it does not support outputting to CloudWatch (neither does Log4j2, which needs an adapter provided by Amazon) this could probably just be supported as a basic extension.

Java Profiler

Add profiling plugin to repo, such that users can profile their Lambda code and download the profiles from IOpipe.

Background Thread for event upload.

After each execution (which generates events), events are uploaded to IOpipe synchronously: a connection is made to the service, the event is uploaded, then the result is returned to whoever requested it. An issue with this is that when multiple invocations are running at the same time, they are likely to cause contention with each other as they upload events. This is compounded by the fact that the JVM will sometimes just decide it wants to recompile some of the bytecode to make it run faster, or run garbage collection, which can increase latency by about 120x-250x. During these events there will be a build-up of events waiting to be sent to IOpipe while the results of the executions are returned to the user. So, especially with regard to user API calls, this can create a stall in getting their data during peak times.

I propose that this be made asynchronous to counter any slowdowns caused by JVM actions out of our control. Basically, there would be a background thread that sends events to IOpipe as they arrive, along with handling a backlog queue.

For non-concurrent invocations where only one single invocation is running at a time this should create little overhead and added latency. Due to the way AWS works there will always have to be at least one user invocation running in order to flush all of the events.

One thing which would need to be considered is the timeout when there is a backlog of the data, since the current execution will be terminated if it times out. So the flushing of the events should not take longer than the timeout period.
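The proposal above can be sketched (in JavaScript for brevity, though the agent under discussion is Java) as a queue drained by a single consumer, with a flush that respects a deadline so it cannot outlive the invocation. ReportQueue is purely illustrative, not agent code:

```javascript
// Illustrative sketch of a background upload queue with a bounded flush.
class ReportQueue {
  constructor(send) {
    this.send = send;   // uploads a single report (injected)
    this.pending = [];  // backlog of reports awaiting upload
  }
  enqueue(report) {
    this.pending.push(report);
  }
  // Drain until empty or deadlineMs elapses; returns the number sent,
  // so a near-timeout invocation can stop flushing early.
  flush(deadlineMs, now = Date.now) {
    const start = now();
    let sent = 0;
    while (this.pending.length && now() - start < deadlineMs) {
      this.send(this.pending.shift());
      sent++;
    }
    return sent;
  }
}

const uploaded = [];
const queue = new ReportQueue(report => uploaded.push(report));
queue.enqueue({ id: 1 });
queue.enqueue({ id: 2 });
const flushed = queue.flush(1000);
```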

Use GitHub Pages for JavaDoc

I think what would be really nice would be if the JavaDoc were to be placed in GitHub Pages stored in the repository as its own branch (rather than as /docs because an automated script would be quite bothersome and mess with merging so much). That way all the documentation is available and can be browsed. There is no need to worry about formatting much as JavaDoc outputs HTML and such. As a sample the documentation pretty much looks like this: https://docs.oracle.com/javase/8/docs/api/.

Plugin enable/disable is case-sensitive.

Checks for which plugins are enabled or disabled are case-sensitive; since the event-info plugin has a hyphen in its name, and environment variables specified on the command line cannot contain hyphens (only underscores), those plugins cannot be enabled or disabled.
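One possible fix is to normalize the plugin name before building the environment variable key, making the lookup hyphen-safe and the value comparison case-insensitive. The helper names and the IOPIPE_*_ENABLED key pattern below are assumptions for illustration, not current agent behavior:

```javascript
// Hypothetical normalization: derive an environment-variable-safe key
// from a plugin name, so hyphenated names like "event-info" can be
// toggled, and compare values case-insensitively.
function pluginEnvKey(pluginName) {
  return (
    'IOPIPE_' +
    pluginName.replace(/[^A-Za-z0-9]+/g, '_').toUpperCase() + // hyphens -> underscores
    '_ENABLED'
  );
}

function pluginEnabled(pluginName, env) {
  const raw = env[pluginEnvKey(pluginName)];
  return raw !== undefined && raw.toLowerCase() === 'true'; // case-insensitive value
}
```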

Logging vs printing to console

Reading the code, it seems that we print output to the console at times -- shouldn't we be using logging so that we/users can customize the log level? Ex. using log4j, suggested because that's what the default serverless Lambda project uses, and seems to be a norm in Java?

"qa-events" for Java

We have qa-events for other languages, we should have similar integration + qa tests for Java.

Inconsistency between environment variables for agents

Okay, so I discovered an inconsistency between the plugins such as the profiler plugin to enable them:

  • Java: IOPIPE_PROFILER_ENABLE
  • Python: IOPIPE_PROFILER_ENABLED
  • Javascript: IOPIPE_ENABLE_PROFILER

What would be the best way to handle this?

`NullPointerException` in statistics/profiler code.

This is a non-fatal exception; IOpipe still records and sends events, but if it occurs the profiler plugin does not report anything (that is, even if profiling is enabled, the dashboard for that event will just say that no profiler information is available). I have only seen this once, but it appears that it is possible for the statistics part of the profiler to throw an exception. Since this is related to the thread statistics code, my guess is that a thread was destroyed in the middle of gathering statistics for it, which could be why this exception is thrown. I picked this up while checking whether the release plugin works.

It seems like the pre-execution fails, which then causes the post-execution to fail too when the thread statistics snapshot could not be created.

Logs and traces:

[INFO] 16:06:15.012 [main] DEBUG com.iopipe.Engine - >>> BEGIN TEST: mock-profilerplugin
[INFO] 16:06:15.013 [main] DEBUG com.iopipe.IOpipeService - Invoking context 6cd28fa7
[INFO] 16:06:15.024 [IOpipe-ProfilerGetURL] DEBUG com.iopipe.plugin.profiler.ProfilerExecution - Profiler URL: https://localhost/profiler
[INFO] 16:06:15.026 [IOpipe-ProfilerGetURL] DEBUG com.iopipe.plugin.profiler.ProfilerExecution - Got upload URL: https://localhost/profiler-result
[INFO] 16:06:15.026 [IOpipe-ProfilerGetURL] DEBUG com.iopipe.plugin.profiler.ProfilerExecution - Got access token: token
[INFO] 16:06:15.027 [IOpipe-ProfilerGetURL] DEBUG com.iopipe.plugin.profiler.ProfilerExecution - Signer sent: {result=201, type=application/json; charset=utf-8, body=109 bytes} {"signedRequest":"https://localhost/profiler-result", "jwtAccess":"token", "url":"http://localhost/snapshot"}
[INFO] 16:06:15.027 [main] ERROR com.iopipe.IOpipeService - Could not run pre-executable plugin.
[INFO] java.lang.NullPointerException: null
[INFO]  at com.iopipe.plugin.profiler.ThreadStatistics.snapshots(ThreadStatistics.java:156) ~[classes/:?]
[INFO]  at com.iopipe.plugin.profiler.ThreadStatistics.snapshots(ThreadStatistics.java:99) ~[classes/:?]
[INFO]  at com.iopipe.plugin.profiler.ManagementStatistics.snapshot(ManagementStatistics.java:129) ~[classes/:?]
[INFO]  at com.iopipe.plugin.profiler.ProfilerExecution.__pre(ProfilerExecution.java:447) ~[classes/:?]
[INFO]  at com.iopipe.plugin.profiler.ProfilerPlugin.preExecute(ProfilerPlugin.java:82) ~[classes/:?]
[INFO]  at com.iopipe.IOpipeExecution.plugin(IOpipeExecution.java:313) ~[classes/:?]
[INFO]  at com.iopipe.IOpipeService.run(IOpipeService.java:255) ~[classes/:?]
[INFO]  at com.iopipe.Engine.__run(Engine.java:176) ~[test-classes/:?]
[INFO]  at com.iopipe.Engine.lambda$generateTests$17(Engine.java:256) ~[test-classes/:?]

....................

[INFO] 16:06:16.755 [main] ERROR com.iopipe.IOpipeService - Could not run post-executable plugin.
[INFO] java.lang.NullPointerException: null
[INFO]  at com.iopipe.plugin.profiler.ProfilerExecution.__post(ProfilerExecution.java:187) ~[classes/:?]
[INFO]  at com.iopipe.plugin.profiler.ProfilerPlugin.postExecute(ProfilerPlugin.java:96) ~[classes/:?]
[INFO]  at com.iopipe.IOpipeExecution.plugin(IOpipeExecution.java:313) ~[classes/:?]
[INFO]  at com.iopipe.IOpipeService.run(IOpipeService.java:303) ~[classes/:?]
[INFO]  at com.iopipe.Engine.__run(Engine.java:176) ~[test-classes/:?]
[INFO]  at com.iopipe.Engine.lambda$generateTests$17(Engine.java:256) ~[test-classes/:?]

Possibility of having an overhead counter in the report?

There is some overhead to the Java agent, and I would like a way to record that overhead in a report, as something at the top level of the JSON. This would help in having a value that is a bit closer to the execution time Amazon reports, especially during the initial cold start where it will be much longer.

Implementation-wise, it would basically run a timer from the earliest possible instant (at the entry of the service call) to the latest possible time (just before the JSON finishes generation).
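A minimal sketch of that timer, in JavaScript for illustration (the agent itself is Java); buildReport and the top-level overhead field name are assumptions, not the actual report schema:

```javascript
// Illustrative only: time from the earliest entry point to just before
// the report is serialized, and attach it as a top-level field.
function buildReport(runUserHandler) {
  const entered = process.hrtime.bigint(); // earliest possible instant
  const result = runUserHandler();
  const report = { result };
  // Latest possible point: just before serialization completes.
  report.overhead = Number(process.hrtime.bigint() - entered); // nanoseconds
  return JSON.stringify(report);
}

const parsed = JSON.parse(buildReport(() => 42));
```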

Write Tutorial Documentation on optimizing lambdas.

After refactoring the profiler and such, write up a tutorial on using the profiler to optimize your code along with a bunch of things that can be done to make it better. Additionally include some examples which use progressively better code as part of the tutorial.

This should be done after #75, #77, #82, #84, as all of those will make the profiler snapshots and statistics more sane to base information on.

Improve the release process for Java

Okay! So we definitely need to improve and simplify the release process. @pselle and I have been discussing this on Slack, and we both feel that the release process is too complicated. Some of the bad points:

  • Before the release we have to update the JavaDocs/Site and correct IOpipeConstants.
  • It is not automatic and requires that we run a script to release locally and with the proper environment variables.
  • The build failed today and created a tag; CircleCI's repository cache had to be cleared so it no longer saw that tag, and we also could not release a different version because we have to release our SNAPSHOT version.

Ideally the best solution would be one of these:

  1. Tag the release where the build system picks that up and performs a release from it.
  2. Modify a single file or similar which the build system picks up and performs a release from it.

It should be made to work on CircleCI 2.0, since 1.0 is nearing end of life (see issue #38).

If a release cannot be made, it should back off and not try to modify the repository again or push anything that cannot be reverted. Cancelled or failed releases should not leave the repository stuck like it did before with the tag issue.

I think for safety, if we can, we should clone the repository to a temporary directory and perform the release work there if we make any commits. Then, if the release worked, push the commits to master; that way the cached repository does not get messed up. If multiple commits would be made, maybe we can squash them into a single commit.

I think one thing we can do automatically is that perhaps we can split off the JavaDocs and instead put that into another repository or a non-repository location (iopipe.com?). Essentially have a hook that when there is a new commit to master it runs the JavaDoc and site build and stores that. That way we do not need to run commits and the JavaDoc would always be up to date on master. If we want to go a step further the script could also mirror tags in the main repository for releases if that is important anyway. At least then we would not have to worry about running the JavaDoc ever and having that complicate pulling and the release process.

This message is already pretty long so I will put this down for now and add another reply following this.

Removal of dependency `javax.json`? (May reduce coldstart times)

The JSON library contains 92 classes, and currently the core agent only uses one class significantly, plus a few others just to store some information. Removing it means fewer classes have to be looked at and compiled within the JVM, so coldstart times should improve slightly.

The testing framework will still use it naturally.

Build failure when running `javadoc`

Error:

[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6.884 s
[INFO] Finished at: 2018-01-03T10:53:31-05:00
[INFO] Final Memory: 20M/249M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:3.0.0:javadoc (default-cli) on project iopipe: An error has occurred in Javadoc report generation:
[ERROR] Exit code: 1 - /home/pam/IOpipe/java-agent/src/main/java/com/iopipe/IOPipeMeasurement.java:334: error: @param name not found
[ERROR] * @param __d The execution duration in nanoseconds.
[ERROR] ^
[ERROR] /home/pam/IOpipe/java-agent/src/main/java/com/iopipe/IOPipeMeasurement.java:337: warning: no @param for __ns
[ERROR] public void setDuration(long __ns)
[ERROR] ^
[ERROR] /home/pam/IOpipe/java-agent/src/main/java/com/iopipe/IOPipeService.java:24: error: type arguments not allowed here
[ERROR] * method {@link IOPipeService#run(Context, Supplier<R>)} may be called.
[ERROR] ^
[ERROR] /home/pam/IOpipe/java-agent/src/main/java/com/iopipe/IOPipeService.java:28: warning - Tag @link:illegal character: "60" in "IOPipeService#run(Context, Supplier<R>)"
[ERROR] /home/pam/IOpipe/java-agent/src/main/java/com/iopipe/IOPipeService.java:28: warning - Tag @link:illegal character: "62" in "IOPipeService#run(Context, Supplier<R>)"
[ERROR] /home/pam/IOpipe/java-agent/src/main/java/com/iopipe/IOPipeService.java:28: warning - Tag @link: can't find run(Context, Supplier<R>) in com.iopipe.IOPipeService
[ERROR] /home/pam/IOpipe/java-agent/src/main/java/com/iopipe/SimpleRequestStreamHandlerWrapper.java:27: error: invalid use of @return
[ERROR] * @return The return value.
[ERROR] ^
[ERROR] /home/pam/IOpipe/java-agent/src/main/java/com/iopipe/http/RemoteConnection.java:5: error: reference not found
[ERROR] * service. The server is sent {@link IOPipeHTTPRequest}s and the result of
[ERROR] ^
[ERROR] /home/pam/IOpipe/java-agent/src/main/java/com/iopipe/http/RemoteConnection.java:6: error: reference not found
[ERROR] * those requests are returned within a {@link IOPipeHTTPResult}.
[ERROR] ^
[ERROR] 
[ERROR] Command line was: /usr/lib/jvm/java-8-oracle/jre/../bin/javadoc @options @packages
[ERROR] 
[ERROR] Refer to the generated Javadoc files in '/home/pam/IOpipe/java-agent/target/site/apidocs' dir.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
