
Long Task API

Long Tasks is a new real user measurement (RUM) performance API that enables applications to measure responsiveness. It detects the presence of "long tasks" that monopolize the UI thread for extended periods of time and block other critical tasks from being executed, e.g. reacting to user input.

Background

As the page is loading, and while the user is interacting with the page afterwards, both the application and the browser queue various events that are then executed by the browser -- e.g. the user agent schedules input events based on the user's activity, the application schedules callbacks for requestAnimationFrame, and so on. Once in the queue, these events are dequeued one-by-one by the browser and executed -- see "the anatomy of a frame" for a high-level overview of this process in Blink.

However, some tasks can take a long time (multiple frames), and if and when that happens, the UI thread is locked and all other tasks are blocked as well. To the user this is commonly visible as a “locked up” page where the browser is unable to respond to user input; this is a major source of bad user experience on the web today:

  • Delayed "Time to Interactive": while the page is loading, long tasks often tie up the main thread and prevent the user from interacting with the page even though the page is visually rendered. Poorly designed third-party content is a frequent culprit.
  • High/variable input latency: critical user interaction events (tap, click, scroll, wheel, etc.) are queued behind long tasks, which yields a janky and unpredictable user experience.
  • High/variable event handling latency: similar to input, but for processing event callbacks (e.g. onload events), which delays application updates.
  • Janky animations and scrolling: some animation and scrolling interactions require coordination between the compositor and main threads; if the main thread is blocked by a long task, the responsiveness of animations and scrolling suffers.

Some applications (and RUM vendors) are already attempting to identify and track cases where "long tasks" happen. For example, one known pattern is to install a short periodic timer and inspect the elapsed time between successive calls: if the elapsed time is greater than the timer period, there is a high likelihood that one or more long tasks delayed execution of the timer. This mostly works, but it has several negative performance implications: the application is polling to detect long tasks, which prevents quiescence and long idle blocks (see requestIdleCallback); it's bad for battery life; and there is no way to know what caused the delay (e.g. first-party vs. third-party code).
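For illustration, a minimal sketch of this polling pattern; the period and the 50ms tolerance are arbitrary choices for the example:

// Legacy polling approach: schedule a short periodic timer and flag
// iterations that arrive late. Period and tolerance are arbitrary.
const PERIOD_MS = 100;
let last = performance.now();
setInterval(() => {
  const now = performance.now();
  const lateness = now - last - PERIOD_MS;
  if (lateness > 50) {
    // One or more long tasks likely delayed this tick, but there is
    // no way to tell what caused the delay (first vs. third party).
    console.warn(`Main thread blocked for ~${Math.round(lateness)}ms`);
  }
  last = now;
}, PERIOD_MS);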

The RAIL performance model suggests that applications should respond to user input in under 100ms, and to touch moves and scrolling in under 16ms. Our goal with this API is to surface notifications about tasks that may prevent the application from hitting these targets.

Terminology

Major terms:

  • frame or frame context refers to the browsing context, such as an iframe (not an animation frame), embed, or object
  • culprit frame refers to the frame or container (iframe, object, embed, etc.) that is being implicated for the long task
  • attribution refers to identifying the type of work (such as script, layout, etc.) that contributed significantly to the long task AND which browsing context or frame is responsible for that work
  • minimal frame attribution refers to the browsing context or frame that is implicated overall for the long task

V1 API

The Long Task API introduces a new PerformanceEntry object, which reports instances of long tasks:

interface PerformanceLongTaskTiming : PerformanceEntry {
  [SameObject] readonly attribute FrozenArray<TaskAttributionTiming> attribution;
};

Attribute definitions of PerformanceLongTaskTiming:

  • entryType: "longtask"

  • startTime: DOMHighResTimeStamp of when long task started

  • duration: elapsed time (as DOMHighResTimeStamp) between start and finish of task

  • name: minimal frame attribution. Possible values are:

    • "self"
    • "same-origin-ancestor"
    • "same-origin-descendant"
    • "same-origin"
    • "cross-origin-ancestor"
    • "cross-origin-descendant"
    • "cross-origin-unreachable"
    • "multiple-contexts"
    • "unknown"

  • attribution: sequence of TaskAttributionTiming, a new PerformanceEntry object that reports attribution within long tasks. To see how attribution is populated for different values of name, see the section below: Pointing to the culprit

interface TaskAttributionTiming : PerformanceEntry {
  readonly attribute DOMString containerType;
  readonly attribute DOMString containerSrc;
  readonly attribute DOMString containerId;
  readonly attribute DOMString containerName;
};

Attribute definitions of TaskAttributionTiming:

  • entryType: "taskattribution"
  • startTime: 0
  • duration: 0
  • name: type of attribution, e.g. "script" or "layout"
  • containerType: type of container for the culprit frame, e.g. "iframe" (most common), "embed", or "object"
  • containerName: DOMString, container's name attribute
  • containerId: DOMString, container's id attribute
  • containerSrc: DOMString, container's src attribute

Long task events will be delivered to all observers (in frames within the page or tab) regardless of which frame was responsible for the long task. The goal is to allow all pages on the web to know if disruptions are occurring and who (first-party or third-party content) is causing them.

The name field provides minimal frame attribution so that the observing frame can respond to the issue in the proper way. In addition, the attribution field provides further insight into the type of work (script, layout, etc.) that caused the long task, as well as which frame is responsible for that work. For more details on how the attribution is set, see the "Pointing to the culprit" section.

The above covers existing use cases found in the wild, enables document-level attribution, and eliminates the negative performance implications mentioned earlier. To receive these notifications, the application can subscribe to them via the PerformanceObserver interface:

const observer = new PerformanceObserver(function(list) {
  for (const entry of list.getEntries()) {
     // Process long task notifications:
     // report back for analytics and monitoring
     // ...
  }
});


// Register observer for long task notifications.
// Since the "buffered" flag is set, longtasks that already occurred are received.
observer.observe({type: "longtask", buffered: true});

// Long script execution after this will result in queueing
// and receiving "longtask" entries in the observer.
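Inside the callback, each entry exposes the fields defined above. A minimal sketch of consuming them; the reportLongTask helper and the "/analytics" endpoint are illustrative placeholders, not part of the API:

// Hypothetical helper: serialize the fields of a longtask entry and
// beacon them to a placeholder analytics endpoint.
function reportLongTask(entry) {
  const report = {
    name: entry.name,           // minimal frame attribution, e.g. "self"
    startTime: entry.startTime, // when the long task started
    duration: entry.duration,   // elapsed time in milliseconds
    attribution: entry.attribution.map((attr) => ({
      type: attr.name,                   // e.g. "script"
      containerType: attr.containerType, // e.g. "iframe"
      containerSrc: attr.containerSrc,
      containerId: attr.containerId,
      containerName: attr.containerName,
    })),
  };
  navigator.sendBeacon("/analytics", JSON.stringify(report));
}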

The long-task threshold is 50ms. That is, the UA should emit long-task entries whenever it detects tasks whose execution time exceeds 50ms.

Demo

For a quick demo, visit this website in a browser that supports the Long Tasks API.

For a demo of long tasks from same-origin & cross-origin frames, see this website. Interacting with the iframed Wikipedia page will generate cross-origin long task notifications.

Pointing to the culprit

A long task represents a top-level event loop task. Within this task, different types of work (such as script, layout, style, etc.) may be done, and they could be executed within different frame contexts. The work could also be global in nature, such as a long GC that is process-wide or frame-tree-wide.

Thus, pointing to the culprit has a couple of facets:

  • Pointing to the overall frame to blame for the long task as a whole: this is referred to as "minimal frame attribution" and is captured in the name field
  • Pointing to the type of work involved in the task, and its associated frame context: this is captured in TaskAttributionTiming objects in the attribution field of PerformanceLongTaskTiming

Therefore, the name and attribution fields on PerformanceLongTaskTiming together paint the picture of where the blame rests for a long task.

The security model of the web means that sometimes a long task will happen in an iframe that is unreachable from the observing frame. For instance, a long task might happen in a deeply nested iframe from a different origin than mine. Or, similarly, I might be an iframe doubly embedded in a document, and a long task might happen in the top-level browsing context. In the web security model, I can know from which direction the issue came -- one of my ancestors or descendants -- but to preserve the frame origin model, we must be careful about pointing to the specific container or frame.

Currently the TaskAttributionTiming entry in attribution is populated with "script" work (in the future, layout, style, etc. will be added). The container or frame implicated in attribution should match up with name as follows:

value of name                | frame implicated in attribution
-----------------------------|--------------------------------
"self"                       | empty
"same-origin-ancestor"       | same-origin culprit frame
"same-origin-descendant"     | same-origin culprit frame
"same-origin"                | same-origin culprit frame
"cross-origin-ancestor"      | empty
"cross-origin-descendant"    | first cross-origin child frame between my own frame and the culprit frame
"cross-origin-unreachable"   | empty
"multiple-contexts"          | empty
"unknown"                    | empty
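
For illustration, a sketch of an observer acting on this mapping; the branch bodies are hypothetical application logic, not behavior prescribed by the API:

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    switch (entry.name) {
      case "self":
        // Our own frame is to blame; container fields are empty.
        break;
      case "cross-origin-descendant": {
        // attribution points at the first cross-origin child frame on
        // the path to the culprit; containerSrc identifies it.
        const [attr] = entry.attribution;
        console.warn("Embedded third-party long task:", attr && attr.containerSrc);
        break;
      }
      default:
        // Ancestor, unreachable, multiple-contexts, unknown: the
        // attribution entries carry no container information.
        break;
    }
  }
}).observe({type: "longtask"});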

Privacy & Security

The Long Tasks API surfaces long tasks above a 50ms threshold to developers via JavaScript (the PerformanceObserver API). It includes origin-safe attribution information about the source of the long task. Together, the threshold and the origin-safe attribution provide adequate protection against security attacks on the browser.

However, while the API doesn't introduce any new privacy attacks, it could expedite existing ones. If this were to become a concern, additional mitigations could be implemented, such as dropping the "culprit" attribution after a per-target-origin threshold is exceeded, or limiting reports to 10 origins per minute.

A detailed security & privacy doc is here: https://docs.google.com/document/d/1tIMI1gau_q6X5EBnjDNiFS5NWV9cpYJ5KKA7xPd3VB8/edit#

V2 API Sketch

See: https://docs.google.com/document/d/125d69JAC7nyx-Ob0a9Z31d1uHUGu4myYQ3os9EnGfdU/edit

Alternatives Considered

Why not just show sub-tasks vs. top-level tasks with attribution?

This API will show top-level long tasks along with attribution for the specific sub-tasks that were problematic. For instance, within a 50ms top-level task, sub-tasks such as a 20ms script execution or a 30ms style & layout update will be attributed. This raises the question: why show the top-level task at all? Why not only show long sub-tasks such as script, style & layout that are directly actionable by the developer? After all, the top-level task may contain un-attributable segments, such as browser work (e.g. GC or browser events).

The rationale here is that showing the top-level task is good for web developers, even though they will actively consume the actionable sub-tasks such as long scripts and act on them. Over time the sub-task attribution will keep expanding, making more of the long task actionable. Showing the top-level task gives developers a direct indication of main-thread busyness, and since this directly impacts the user experience, it is appropriate for them to know about it as a problem signal -- even if they cannot have complete visibility into, or full actionability for, the entire length of the long task. In many cases developers may be able to reproduce the problem in the lab or locally, glean additional insights, and get to the root cause. Long tasks also provide context for long sub-tasks: for instance, a 20ms style and layout pass or a 25ms script execution may not be terrible by themselves, but if they happen consecutively (e.g. script started from requestAnimationFrame) and cause a long 50ms task, then this is a problem for user responsiveness.


longtasks's Issues

Generic model for causal attribution (non-sampling based)

Several issues and previous design documents have demonstrated a need for developers to be able to identify the responsible work in order to fix the issue. Proposals thus far have mostly focused on what script is currently being executed, as opposed to what script is ultimately responsible for the long task occurring in the first place. I'd like to propose a generic model for attribution based on causality instead of sampling.

I discussed this proposal and the difference between these approaches at a WebPerfWG 2020 TPAC session (recording and slides are available).

Brief Summary of Benefits:

  • Provides unique insight not already available via the JS Sampling API.
  • More intuitive starting point for developers to investigate on pages with varied authorship (easily identifies third-party sources).
  • Does not require heavy intra-task bookkeeping.
  • Proven track record for matching developer intuition in the Lighthouse project.

Very Rough Implementation Description:

Terminology:

  • initiating invocation: a specific invocation of a web API that schedules a new task

    • Examples:
      • setTimeout
      • fetch
      • addEventListener
  • causal task: the task that was ultimately responsible for another task's existence

  • The causal task of any given task is the result of traversing the tree of initiating invocations until a task is reached that was the initial evaluation of a script resource with a URL.

    • Generate a numeric identifier for each main-thread task and initiating invocation
    • Maintain a map of initiating invocation ID to the causal task ID
    • Upon future tasks scheduled as a result of an initiating invocation, associate any new initiating invocations with the same causal task ID as the current task (see the sketch below).
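
A rough sketch of this bookkeeping, written as JavaScript pseudocode for a hypothetical engine; all names (nextId, invocationToCausalTask, currentTask, and the two hooks) are invented for illustration:

// Hypothetical engine-side bookkeeping for causal task attribution.
let nextId = 0;
const invocationToCausalTask = new Map(); // invocation ID -> causal task ID
let currentTask = null;                   // { id, causalTaskId }

// Called whenever a web API (setTimeout, fetch, addEventListener, ...)
// schedules future work during the current task.
function onInitiatingInvocation() {
  const invocationId = nextId++;
  // New work inherits the causal task of the task that scheduled it.
  invocationToCausalTask.set(invocationId, currentTask.causalTaskId);
  return invocationId;
}

// Called when a main-thread task starts running.
function onTaskStart(invocationId, isInitialScriptEvaluation) {
  const id = nextId++;
  // A task that is the initial evaluation of a script resource with a
  // URL is its own cause; everything else traces back through the map.
  const causalTaskId = isInitialScriptEvaluation
      ? id
      : invocationToCausalTask.get(invocationId);
  currentTask = { id, causalTaskId };
}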


Questions

How can the Lighthouse project, or I personally, help support this effort? :)

Don't talk about "frames"

This spec defines

Frame or frame context refers to the browsing context, such as iframe (not animation frame), embed or object in which some work (such as script or layout) occurs.

But the spec never actually refers to these defined terms. It does say "frame" and "frame tree" a lot, though.

But, it's a bad idea to bring this Blink-specific terminology into the spec land. Just use "browsing context".

"multiple-contexts" doesn't seem useful

Since it's an information leak to disclose whether a cross-origin iframe contained another iframe, this would only apply to same-origin iframes. And in that case, the script can simply look at attribution and infer that there were multiple frames involved.

Should PerformanceObserver.observe(... buffered=true}) work for LongTasks?

In the observe() description[1], when the buffered flag is set, the performance buffer entries are added to the observer buffer. There are no longtask entries in the performance buffer, but I think that it could be useful to expose previous longtask entries when the buffered flag is on. Any thoughts on this? This would require some spec change here and/or in Performance Timeline.

[1] https://w3c.github.io/performance-timeline/#observe-method

Names are inconsistent

same-origin vs. cross-origin-unreachable: why does only one of these have -unreachable?

multiple-contexts: this is the only one that uses the "-contexts" suffix. Can it just be "multiple"?

I can understand if the answer is "let's not change this because we shipped already". In that case we may want to add a note explaining that the inconsistency is historical, not intentional.

Specifying the attributes of each object more rigorously

Each object (TaskAttributionTiming, PerformanceLongTaskTiming) needs to specify getter algorithms for each of its attributes, including inherited attributes. Some of these will be constant:

(PerformanceLongTaskTiming) The entryType attribute's getter must return "longtask"

(TaskAttributionTiming) The startTime attribute's getter must return 0

(Note the addition of "'s getter", which is not in the current spec. Just a nit.)


However for the non-constant ones, you need to be a bit more careful, since you need the processing model to "set" them. This is a bit strange because they only have getters (they are readonly). If you want to be rigorous, you end up defining this like so:

Each TaskAttributionTiming has an associated frame name, frame ID, and frame src.

...

The frameName attribute's getter must return this TaskAttributionTiming's associated frame name.

...

(in the processing model) Set attribution's associated frame name to the value of iframe's name content attribute, or null if the attribute is absent.

That is, you have the "private variable" of "associated frame name" which the public frameName attribute's getter returns.


There is a less rigorous version which you can get away with without much fuss, which is this:

The frameName attribute's getter must return the value it is initialized with during construction.

...

(in the processing model) Initialize attribution's frameName attribute to the value of iframe's name content attribute, or null if the attribute is absent.

I think I would probably recommend this less rigorous version for this spec.


The main thing to avoid is having "must" algorithms which are meaningless. For example, the current spec says

The frameName attribute must return DOMString with culprit iframe's name attribute.

That's not helpful since we have no idea how to determine the culprit. How do you implement that getter? It turns out the answer is in the processing model section. That reality is better reflected by one of the above two formulations. You can keep the helpful content here as a description, but not as a MUST algorithm. Example:

The frameName attribute's getter must return the value it is initialized with during construction. This will be derived from the culprit's iframe's name attribute.

Expose sampled call stacks

Can the API tell us more specific information about which snippet of code is causing the long task? Maybe a call stack?

Currently the containerSrc attribute is helpful if an iframe is causing the issue (though the culprit could be any one of the many JS files loaded into that iframe).

Top-level browsing context scoping

Why does this use top-level browsing context scoping? If A nests B and we run the event loop for B, we end up passing A's browsing context to the Long Tasks API from HTML's event loop algorithm. This seems rather weird. (And that in turn will do something with both A and B.)

Tracking microtasks?

Nice explainer! One thing I'm wondering, though, is whether it would also make sense to track microtasks. In the case of MutationObserver or Promises, it's possible to jank the main thread with 50 microtasks that each take 1ms, thus exceeding the 50ms threshold but not showing up as a "long task" because it was 50 separate operations. This would be a way for e.g. third-party ad networks to jank the main thread but still fly under the radar, undetected by the proposed API.

Obviously tracking microtasks would make it much more difficult to give proper attribution, but it seems like something worth considering. Even ignoring the possibility of bad actors trying to defeat the system, Promises are in pretty widespread use these days, so a lot of jank is happening that would go unmeasured if we didn't take microtasks into account.
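
For illustration, a sketch of the pattern described above; whether a given UA reports this as a single long task depends on how microtask checkpoints are accounted against the running task, which is exactly the ambiguity this issue raises:

// Fifty ~1ms microtasks, chained via promises. Each individual callback
// is short, but the chain can block the main thread for ~50ms total.
function busyWait(ms) {
  const start = performance.now();
  while (performance.now() - start < ms) {} // burn CPU
}

let p = Promise.resolve();
for (let i = 0; i < 50; i++) {
  p = p.then(() => busyWait(1));
}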

PerformanceTimeline support

Right now, I think this is the only spec that has PerformanceObserver support but not PerformanceTimeline support (see w3c/performance-timeline#78).

Is that intentional? Could we get this data in the Timeline, possibly with a buffer like ResourceTiming/ServerTiming do?

remove index.html in the master branch

deploy.sh will automatically generate a new index.html and deploy to the gh-pages branch... Shall we remove the index.html in the master branch to avoid inconsistency?

Processing model Google doc is not public

I think it is restricted to chromium.org. Maybe that is intentional for now and you were planning to open it up later, in which case let's call this a tracking issue for remembering to do that :)

Better understand correlating long tasks from different iframes

[cover bug for exploration, not sure if this is a problem yet]
Understand how long tasks from different iframes could be correlated on a timeline.
Long tasks are reported relative to the observing document's time origin, so it's hard for multiple cross-origin frames to collaborate, e.g. the parent/host page and ads in separate frames.
Is this problematic?

The host page and the Ads iframe will both observe the same long task but will receive separate start-times (relative to their own document time-origin). The host page cannot see detailed attribution but the Ad iframe can see this, and may want to relay more details to the host page.

Needs toJSON defined

Today, Chrome does not return the collection when toJSON is called. This is correct per the spec, but I believe it is unexpected. The toJSON output should include the collection.
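
A sketch of the reported behavior and a manual workaround, assuming "the collection" refers to the attribution array and that PerformanceEntry's toJSON is available on the attribution entries as well:

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const json = entry.toJSON();
    // Per this report, the attribution array may be omitted from json;
    // serializing each TaskAttributionTiming entry restores it.
    json.attribution = entry.attribution.map((attr) => attr.toJSON());
    console.log(JSON.stringify(json));
  }
}).observe({type: "longtask"});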

Editorial issues with the processing model

Lots of minor things, none super-important, but good to fix. Maybe after switching to Bikeshed.

  • All variables should be marked up as such.
    • <var>settings</var>, or in Bikeshed, |settings|.
  • All curly quotes need to become straight quotes.
  • Event loop definitions: the associated things should be <dfn>s, not <i>s.
    • In Bikeshed you'd do <dfn for="task">start time</dfn>
  • Every reference to those associated things should be a link
    • In Bikeshed you'd do either <a for="task">start time</a> or [=task/start time=]
  • Event loop processing model: Performance object and its now() method should be linked and monospaced
    • In Bikeshed that'd be {{Performance}} and {{Performance/now()}}
  • Event loop processing model: "the report long tasks algorithm" should be linked and "(below)" should be removed
    • In Bikeshed this would be accomplished by changing <h4>Report Long Tasks</h4> to <h4 dfn>Report Long Tasks</h4>. Then you can do <a>report long tasks</a>.
  • Calling scripts: "prepare to run script" should link
    • Just <a>prepare to run script</a> in Bikeshed
  • I would move "Report long tasks" up a level; no need to nest it under "Additions to the Long Task Spec". I would also not capitalize it as such, just "Report long tasks"
  • Report long tasks needs to state that it takes as input a task task, and step 1 needs to talk about task's end/start time, not just the bare concepts of end/start time.
  • Report long tasks concepts that need to be linked (just <a>concept</a> in Bikeshed should usually work):
    • start time/end time/script evaluation environment settings object set, as above
    • environment settings object
    • responsible browsing context
    • top-level browsing context
    • relevant Realm (for="global object", so either <a for="global object">relevant Realm</a> or [=global object/relevant Realm=])
    • active document
    • list of descendant browsing contexts
    • relevant settings object (for="Realm")
    • origin
    • same origin
    • ancestor (for="browsing context")
    • iframe (Bikeshed: <{iframe}>)
    • all IDL attributes (Bikeshed: {{TaskAttributionTiming/name}} and such)
    • all content attributes (Bikeshed: <{iframe/name}>)
  • Report long tasks NOTEs need to become real notes.
    • In Bikeshed this will happen automatically to any text prefixed with NOTE:)
  • Security and Privacy Considerations
    • The paragraph breaking got weird; there's a <br> where there should be a </p><p>

Handle longtask attribution for embed & object in addition to iframe

tl;dr: In the current implementation, TaskAttribution is populated with frameId, frameName, frameSrc, etc. pointing to the culprit iframe -- if the culprit frame is an iframe. However, we should also handle the case where the culprit is an embed or object.

Details:
"attribution" field is a set of "TaskAttributionType" entries - these are intended to contain things related to type of work eg. scriptUrl / src code location for script etc.
This section in the explainer provides relevant context here for the purpose & intention of "TaskAttributionType".
We added some fields in here pointing to the "culprit frame" (orthogonal to type of work) in which the script / layout executed. If the culprit is an iframe then "frameId", "frameName" & "frameSrc" are populated.

Option #1:
keep current flat structure in TaskAttributionTiming and add additional attributes such as:

  • "containerType": depending on the culprit this would be set to: "frame", "object" or "embed"
  • If the above is "frame" then populate frameId, frameName, frameSrc as before; else if "object" then populate different fields: embedType, embedName, embedData etc; else if "embed" then ...

Option #2:
we could add a layer of abstraction and move everything related to the "container" to a "ContainerType" class, that way fields related to the "container" are separated from fields of "TaskAttributionType" -- which is intended to contain things related to type of work eg. scriptUrl etc.

Personally I am leaning towards #2.

@domenic
@igrigorik

PerformanceLongTaskTiming millisecond round vs floor

Currently, we use floor in PerformanceLongTaskTiming, but this means that the accuracy of the timestamps is not as good as it could be. On the other hand, using round would mean that the end_time would sometimes point to a timestamp in the future. What is a good solution that improves accuracy while not causing issues?

Typo

s/prevent the user from interactive /prevent the user from interacting /

containerType is confusing

Hi.

I am trying to implement longtask's logging on the project, but it's not clear how to find the source of problem from entry content.

[Screenshot from the original issue: a longtask entry logged in DevTools]
MacOS 10.14.4
Chrome Version 73.0.3683.86 (Official Build) (64-bit)

Could you please explain how to actually find what is causing this long task?
It might also make a good example to add to the spec.

As far as I understand, name self means that the long task was produced by the current browsing context, but at the same time I have containerType iframe, which is a little bit confusing.
The main source of information here is TaskAttributionTiming, but containerName, containerId and containerSrc are empty, and I don't clearly understand from the specification what exactly they should contain. An example would be highly appreciated.

With all best and kind regards. Anton.

Provide an option for overriding the 50ms threshold

50ms is good as the minimum threshold.
However, there is a lot of content on the web today that generates a lot of 50ms tasks, so it can be spammy to start with. It would be great if developers could increase the threshold.
Ideally this could be an argument to observe().
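
For illustration: today the closest equivalent is filtering in the callback; the durationThreshold argument shown at the end is the hypothetical proposed shape, not shipped behavior:

// Workaround available today: observe all longtasks and filter.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.duration < 100) continue; // application-chosen threshold
    // handle only the genuinely long tasks
  }
});
observer.observe({type: "longtask"});

// Hypothetical proposed shape:
// observer.observe({type: "longtask", durationThreshold: 100});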

Specify Possible "Task Types"

In order to provide attribution, we need a common notion of task types.

Let's come up with a proposal for a set of task types all UAs can agree on.

In the short term:

  • Don't name them script

Strawman: provide attribution for:

  • Render steps
  • Other browser work
  • Posted tasks

Attribution for alert() or other tasks blocked by user input

alert() triggers a task which requires user input to be completed, so it is special in that regard. We could add a flag to indicate 'tasks blocked on user input'. But perhaps this is not needed once we can use attribution from the JS Self Profiling API.

Change name to be less confusing

LongTasks can report long tasks that come from sources other than just scripts. Thus, the name should be changed, because 'script' makes it sound like it's only reporting tasks from scripts.

How to clear longtask entries?

I couldn't find anything in the spec on how long task entries get cleared from the performance timeline. I probably don't need to explain why a full timing buffer isn't great haha.

To my knowledge the current spec only defines:

  • performance.clearMarks()
  • performance.clearMeasures()
  • performance.clearResourceTimings()

How would you go about clearing these? Did I miss something?

toJSON

toJSON() is now mandatory for PerformanceEntry. Should it also be explicitly added to the interface declarations for PerformanceLongTaskTiming and TaskAttributionTiming, which have additional attributes?

Event loop timing reporting seems to ignore reentrancy

The event loop is currently still reentrant. It's not entirely clear to me that the processing model takes this into account. That is, you might get the wrong results for certain tasks that perform expensive operations.
