

Freeze-dry: web page conservation

Freeze-dry captures a web page as it is currently shown in the browser. It takes the DOM, grabs its subresources such as images and stylesheets, and compiles them all into a single string of HTML.

The resulting HTML document is a static, self-contained snapshot of the page that could, for example, be used for archival, offline viewing, or static republishing; it could be saved on a USB stick or attached to an email, and opened on any device.

Technically, freeze-dry is a JavaScript function that is run on a web page. It is mainly intended for use by browser extensions and headless browsers. Much of its behaviour can be customised if desired.

How does it compare to…

Freeze-drying a web page is comparable to making a screenshot, or ‘printing’ to a PDF file. But the snapshot adapts to the viewer’s screen size, allows text to be selected, can be read by a screen reader, and so on; just as it would on the original web page.

It is thus more comparable to web browsers’ “Save As…” feature, except that it puts page resources inside the file (not in a folder next to it), and it captures the current view, after scripts have executed (and it removes the scripts).

Freeze-dry is most similar to what browser extensions like SingleFile or WebScrapbook do. It is used in (and spun off from) the WebMemex browser extension.

But the main difference from all the above: freeze-dry is a JavaScript/TypeScript module, and highly customisable, so it can be used in other software for various snapshotting (or other) purposes.

For example, the researchers at Ink & Switch found freeze-dry to be their favorite solution for making web page clippings for their Capstone creativity tool:

“The solution we settled on for Capstone is freeze-dry. Its use was just a few lines of code.

Freeze Dry takes the page’s DOM as it looks in the moment, with all the context of the user’s browser including authentication cookies and modifications made to the page dynamically via Javascript. It disables anything that will make the page change (scripts, network access). It captures every external asset required to faithfully render that and inlines it into the HTML.

We felt that this is a philosophically-strong approach to the problem. Freeze-dry can save to a serialized .HTML file for viewing in any browser; for Capstone, we stored the clipped page as one giant string in the app’s datastore.”

How does it work?

As a first approximation, freezeDry can be thought of as a simple function that captures the DOM and returns it as a string, like this:

async function simpleFreezeDry() {
    return document.documentElement.outerHTML;
}

However, freezeDry does a lot more: inline frame contents and subresources (as data: URLs), remove scripts and interactivity, expand relative links, timestamp the snapshot, etc.

For a detailed explanation, see How freeze-dry works.

Install

Old-fashioned JS

For a good old JavaScript global variable, download the latest .umd.js script and include it among your scripts, e.g.:

<script src="./freeze-dry.umd.js"></script>

The freeze-dry function is then freezeDry.freezeDry() (adjust example code accordingly).

ES module

For using it as a module in the browser, download the latest .es.js module and import it in your code, e.g.:

import freezeDry from './freeze-dry.es.js'

NPM package

For use via npm/yarn/… (to bundle it with webpack/rollup/vite/…), install the package, e.g.:

npm install freeze-dry

Then, in your code, either import or require it:

import freezeDry from 'freeze-dry'

const { freezeDry } = require('freeze-dry')

Usage

const html = await freezeDry(document, options)

In a few seconds, freezeDry should return your snapshot as a string (potentially a very long one).

The options parameter is optional. In fact, document is optional too (it defaults to window.document). For usage details, see its documentation.

Customising freeze-dry’s behaviour

The options argument to the freezeDry() function lets you tweak its behaviour. For example, instead of inlining subresources as data: URLs, you could store the subresources separately; perhaps to create an MHTML file, or to store each resource on IPFS. See the FreezeDryConfig documentation for all options.

If freezeDry’s options don’t suffice for your needs, you can even build your own custom freezeDry-ish function by directly using freeze-dry’s internals. To get started, have a look at the API documentation, especially the Resource class, and peek at the implementation of FreezeDryer.

freeze-dry's People

Contributors

gozala, reficul31, treora


freeze-dry's Issues

Stop adding <base href>, always rewrite relative URLs

Unfortunately a <base href='...'> is also applied to relative links within a document, e.g. href="#section3". It would be nice to keep those internal links relative. Rewriting relative hrefs rather than using a base element seems the easiest (only?) solution.
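A minimal sketch of that approach (not freeze-dry's actual implementation): resolve every href against the document's base URL, except fragment-only links, which stay relative.

```javascript
// Sketch: make links absolute, but keep within-document links relative.
// Uses the WHATWG URL API; rewriteHref is a hypothetical helper name.
function rewriteHref(href, baseUrl) {
  // A fragment-only link points within the document itself; keep it relative
  // so it still works inside the snapshot.
  if (href.startsWith('#')) return href
  // Resolve all other (possibly relative) URLs against the document's base URL.
  return new URL(href, baseUrl).href
}

rewriteHref('#section3', 'https://example.org/page')    // stays '#section3'
rewriteHref('img/logo.png', 'https://example.org/page') // 'https://example.org/img/logo.png'
```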

SVG support

Needs investigation, fixing, and tests.

Some problems already apparent:

  • an SVG <a> element's .href property appears not to be a string, while we assume it to be.
  • an SVG can link to subresources, which we currently ignore; these need to be handled in extract-links, as well as when crawling subresources.

Fix charset encoding of framed documents

Like issue #29, but for subdocuments inside frames. As remarked here:

        get blob() { return new Blob([this.string], { type: 'text/html' }) },
        get string() {
            // TODO Add <meta charset> if absent? Or html-encode characters as needed?
            return documentOuterHTML(clonedDoc)
        },

The same applies to crawl-subresources for frames whose inner document we cannot access directly.

It seems new Blob() always utf-8-encodes given strings (mdn). I suppose we should add <meta charset="utf-8"> to the DOM before running documentOuterHTML. Alternatively, we could change the blob’s MIME type to text/html;charset=utf-8, something we could not do for the top-level document. Might that be ‘cleaner’?

Problem observed in the wild.
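For illustration, the second option amounts to declaring the charset in the blob's MIME type, matching the UTF-8 encoding that Blob applies to strings (a sketch, assuming an environment with the Blob API):

```javascript
// new Blob() always encodes the given string as UTF-8, so declaring utf-8 in
// the MIME type keeps the declaration and the actual bytes consistent.
const html = '<p>héllo</p>'
const blob = new Blob([html], { type: 'text/html;charset=utf-8' })

// 'é' takes two bytes in UTF-8, so the blob is one byte longer than the
// string's length in characters.
blob.size // 13 (12 characters, with 'é' counted twice)
blob.type // 'text/html;charset=utf-8'
```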

Add a "who's using freeze-dry" section to readme

Found this lib while googling for an end-user tool. Although I'm no JS/TS developer who can benefit from it, I could use a list of tools using this lib. It would also give more visibility of the library itself, apart from the depending software.
Just adding a section with a few known users would suffice, hoping for others using it to add themselves to the list with a simple PR.

Ongoing development

I'm a little worried to see the WebMemex projects haven't been worked on in quite some time. I know they're both considered stable, but when it comes to browser-related projects, things have a tendency to change at a pretty fast pace, to where a few years makes a very big difference.

I've been looking at freeze-dry as an alternative to using SingleFile's CLI, since it lets you save assets without converting them to embedded base64 data, which is good for making continuous backups. But this project hasn't been touched in a year and the browser extension in three years, so I'm not sure whether to invest the time in adapting my backup scripting to use freeze-dry instead or not.

Sorry to be a nag, but it would be very helpful to know. Thank you.

ESM support

Hi @Treora

I would like to use it without build steps in an environment that has ESM support. There are a few things that prevent it right now:

  1. ES modules require a .js file extension to work. Given that the source code is authored in ESM, it should be fairly simple to update all relative imports so they have a file extension.
  2. External dependencies. Sadly there is no simple solution for that, but I would like to suggest the following workaround:
  • Replace imports like import documentOuterHTML from 'document-outerhtml' with something like
    import documentOuterHTML from '../../modules/document-outerhtml/index.js'
  • Create simple files like modules/document-outerhtml/index.js that just do module.exports = require('document-outerhtml')

No. 2 is far from ideal, but it would make it fairly easy for anyone to remap that to whatever they need. It does incur an extra maintenance burden here, though.

I'll probably do one or possibly both of these things as I'd like to make use of this code & will be happy to upstream changes to either or both if that sounds reasonable.

Capture dynamically inserted CSS rules

Scripts can modify stylesheets using the CSSOM functions like CSSStylesheet.insertRule(), used by e.g. emotion.js.

The contents of a <style> element appear to not be updated to reflect the new rules, so I suppose we will thus have to do this ourselves, by going through the rules using CSSOM. Or we take another approach altogether to preserve styles.

In the wild, I observed the problem with images on NYTimes articles, which become a mess (they use emotion.js).

Handle charset encoding declaration

The document may have a <meta charset="..."> tag in the <head>, but that will be obsoleted as we use the parsed document, and later stringify it again. I suppose we could/should delete it from the DOM when capturing it.

Vice versa, we may want to add the appropriate <meta charset="..."> tag to the snapshot; but this seems a task for the application invoking freeze-dry, as we do not know in which encoding the application will store the string.

We could thus..

  • leave the snapshot without a charset declaration, and tell callers to add it themselves. But they won't have the parsed DOM, making this a hassle.
  • Easier, then, is to let the application pass the desired encoding tag as an option to freezeDry(...).
  • Alternatively, we could html-encode all characters so our string only contains plain ASCII, which I presume (rightly or wrongly?) removes the need for declaring the charset.
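The last option could look something like this sketch, which replaces every non-ASCII character with a numeric character reference (the function name is hypothetical):

```javascript
// Replace every character outside the ASCII range with a numeric character
// reference, so the resulting markup is charset-agnostic plain ASCII.
function encodeNonAscii(html) {
  return html.replace(/[^\x00-\x7F]/gu, c => `&#x${c.codePointAt(0).toString(16)};`)
}

encodeNonAscii('naïve') // 'na&#xef;ve'
```

(A real implementation would have to skip contexts where character references are not interpreted, such as the contents of <style> and <script> elements.)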

Make idempotent.

Freeze-drying an already freeze-dried page would ideally not have any effect. Not sure if that's the case now.

@reficul31: may be nice to add a test for this in the integration tests, that takes the output (snapshot) and applies freezeDry to it again.

Deal with <canvas> elements

I just noticed these lines in pagearchive:

if (el.tagName == 'CANVAS') {
  return '<IMG SRC="' + htmlQuote(el.toDataURL('image/png')) + '">';
}

Add provenance metadata

It would be valuable to retain the snapshotted document's URL somewhere, as well as the time of capture, and possibly other metadata. I am not sure whether this should be a task of freeze-dry itself, or of the application invoking it.

My current disposition is towards adding <meta> tags to the snapshot's <head> to add the snapshot's URL and date. The Memento protocol has specified HTTP headers for exactly this purpose, which we could pour into meta tags as such (as discussed on memento-dev):

<meta http-equiv="Memento-Datetime" content="Wed, 30 May 2007 18:47:52 GMT">
<link rel="original" href="https://example.org/">

This feature, as well as the current practice of keeping data-original-... attributes to retain the URLs of subresources, should probably be optional.
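For illustration, a hypothetical helper (not part of freeze-dry) could produce those two tags from the snapshot's URL and capture time; Date.prototype.toUTCString() happens to produce the HTTP-date format that Memento-Datetime uses:

```javascript
// Build the provenance tags sketched above. Both the function name and the
// exact tag set are assumptions, not freeze-dry's API.
function provenanceTags(originalUrl, capturedAt) {
  return [
    `<meta http-equiv="Memento-Datetime" content="${capturedAt.toUTCString()}">`,
    `<link rel="original" href="${originalUrl}">`,
  ].join('\n')
}

provenanceTags('https://example.org/', new Date(Date.UTC(2007, 4, 30, 18, 47, 52)))
// <meta http-equiv="Memento-Datetime" content="Wed, 30 May 2007 18:47:52 GMT">
// <link rel="original" href="https://example.org/">
```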

Keep <noscript> when appropriate

This was previously issue #134 in webmemex-extension ("Images not in snapshots from Medium.com").

When the page was viewed with javascript disabled, we should keep <noscript> tags, and perhaps convert them into <div> tags, in order to make the snapshot correspond to what was rendered. See the corresponding comment in the source:

// If noscript content was not shown, we do not want it to show in the snapshot either. Also, we
// capture pages after scripts executed (presumably), so noscript content is likely undesired.
// TODO We should know whether noscript content was visible, and if so keep it in the doc.
// TODO Keep noscript content in fetched iframe docs, as scripts have not been executed there?
const noscripts = Array.from(doc.querySelectorAll('noscript'))
noscripts.forEach(element => element.parentNode.removeChild(element))

Upstream freeze-dry dependency doesn't work with yarn

We get this error message:

$ capstone [master ≡ +0 ~4 -0 !]> yarn
yarn install v1.10.1
[1/4] Resolving packages...
[2/4] Fetching packages...
error [email protected]: The engine "node" is incompatible with this module. Expected version "6.X.X". Got "8.9.0"
error Found incompatible module
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.

The workaround is to run yarn with --ignore-engines, but it's sort of a pain.

Handle iframes with srcdoc

We currently do not handle the content of an <iframe srcdoc="...">. We inline the iframe's inner doc, when available, into the src as a data: URL, regardless of whether it came from the src or the srcdoc; but a viewer will use the srcdoc instead of the src when it is available, thus things break.

We could choose to always remove srcdoc attributes (or always use them), or we could process srcdoc content as a subresource (except we don't even need to fetch it, and its base URL is equal to the parent document's base URL) and keep it in the srcdoc attribute. Or something in between.

Only grab necessary subresources

Currently, we inline all resolutions listed in an <img>'s srcset, all <audio> and <video> sources, all stylesheets, etcetera. This makes snapshots huge. The upside is that the snapshot will be as rich as the original, and more likely to work and look as intended in various browsers and screen resolutions. Depending on the application, one or the other factor may be more important, so it would be nice to make configurable how much we grab. Some preliminary thoughts on this:

  • One reasonable desire is to grab only things that are currently in use (if this can be tested for). This could help a lot with speeding freeze-dry up, as those things may be available from cache.

  • For images with multiple resolutions, we could read element.currentSrc, and only grab that one. And/or perhaps get the one with highest resolution.

  • For audio and video, the sources are usually different file formats; currentSrc seems a reasonable choice again, or some prewired preference to pick a widely supported and/or well compressed format (again a possible trade-off).

  • For stylesheets, we may filter by media queries, both in a media attribute on a <link> (to omit the whole stylesheet), and in @media at-rules inside stylesheets (to omit the subresources they affect). The next question is then which media queries to filter for: media type (screen/print), window size; possibly again only take what is currently active.

  • For fonts, we could take only the ones currently used/loaded (how? the status attributes of fonts in document.fonts?). And we could hard-code a preference for some well compressed and/or widely supported file format.
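As a rough illustration of the image case, one could keep just the largest candidate from a srcset (a naive sketch; real srcset parsing must handle URLs containing commas, x descriptors, and missing descriptors):

```javascript
// Keep only the candidate with the highest width descriptor from a simple,
// comma-separated srcset value. Naive: assumes "url NNNw" pairs throughout.
function pickLargestFromSrcset(srcset) {
  const candidates = srcset.split(',').map(entry => {
    const [url, descriptor] = entry.trim().split(/\s+/)
    return { url, width: descriptor ? parseInt(descriptor, 10) : 0 }
  })
  candidates.sort((a, b) => b.width - a.width)
  return candidates[0].url
}

pickLargestFromSrcset('small.jpg 320w, medium.jpg 800w, large.jpg 1600w') // 'large.jpg'
```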

Could you please publish used deps onto git as well

I would really like to use this with plain ESM and without build steps. Other than #35, I also run into the issue that some of the dependencies used aren't published in non-compiled form anywhere. @Treora: given that you're the author of some of those deps, would you mind publishing them to GitHub as well?

Thanks

CSP in cloned DOM affects live page

On Firefox, after storing a page (at least if done early), scripts on the page can often not reach the web anymore. This appears to be a bug in Firefox, so I filed it there.

Use a type system

Use JS classes, flow, typescript, mere jsdoc @types, ...?

Some objects especially worth creating types for, as we pass them around:

  1. parse functions and their return value: [{ token, index, note? }, ...].
  2. attributeInfo (documented here).
  3. link objects (documented here).
  4. resource objects (documented here), probably with subtypes HtmlResource/CssResource.

Allow freeze-drying a document snippet

I would like to enable calling freezeDry(element), freezeDry(range), and get back a string that serialises the given Element/Range (possibly also DocumentFragment, array of elements, ...). This would be useful to enable extracting e.g. a single comment from a page, freeze-drying a selection for copy/pasting into another document, etcetera.

Most of our DOM transformations are already written to act on a given rootElement, which need not be the whole document. Hence, exposing this possibility in the API should in theory not be that hard. However, some complications will have to be considered:

  • Stylesheets outside the snippet influence its presentation, and will need to be inlined into the snippet; probably into per-element style="..." attributes, as <style scoped> never became anything.
  • Ancestor elements may influence the snippet's meaning/presentation; e.g. if the element/range is within a <b> element. Furthermore, an element may only be valid inside particular parent elements; e.g. a <tr> needs to be inside a <table>. Depending on the use case, it may or may not be desirable to retain such a <b>, and to wrap such a <tr> in a <table>.
  • As we do not return a whole document, we cannot add a content security policy in a <meta> tag; we need to be even more sure that the output is completely clean, if the snippet ought to be usable in any html document.
  • Probably more...

To do: look into how browsers copy selections to the clipboard; at least Chromium seems to do some effort to inline styles and wrap elements in order to keep the selection's presentation intact.

Do not make within-document links absolute

Relative links pointing to a place within the same document (= only containing a fragment identifier = the href starts with '#') should not be made absolute. This was the case before (see #6 and commit d8eb036), but probably regressed in the rewrite (v0.2). We probably just have to change makeLinksAbsolute.

How to use in browser

I'm having some trouble figuring out how to use this library in the browser. I've taken a look at the way you use it in webmemex-extension, and I'm struggling to figure out a way to use freeze-dry ad-hoc through the browser console.

Any thoughts?

Option to try erase hidden information

One of the use cases of freeze-dry is to snapshot web pages in order to share them with others. If a page is personalised, e.g. a user snapshots their shopping cart of a web shop, the page may contain private information one would rather not share. If that information is visible, the user can notice it and choose not to share (or could edit the page with other tools). But if the information is hidden in the page, for example when a session ID or anti-CSRF token is stored in a hidden input field, they might accidentally share private information they could not see themselves.

I once heard that this risk of accidentally sharing hidden, sensitive information was one of the reasons for Mozilla’s PageShot experiment to finally not capture the DOM and only output a screenshot (despite the excellent work at capturing the DOM, similar to freeze-dry).

Freeze-dry already removes javascript, which removes one potential source of hidden information. We could also consider adding an option to remove <input type="hidden"> elements. And perhaps data-… attributes? Are there other invisible elements/attributes that are often used for sensitive data, and that we should thus consider to filter out?

Of course such a filtering approach will never guarantee cleanness, but it could probably weed out most of the cases. Interestingly, PageShot got a bit closer to a guarantee by taking the inverse approach: not cloning the whole DOM and filtering things out, but trying to only pick the elements and attribute types that it knows about.

Of course, in many use cases one may also want to remove everything that is invisible simply for reducing the size of the output. Ideally, various types of DOM transformations like these would not be implemented in freeze-dry itself, but could be plugged in. But I’ll park the issue here for the time being.

Resolve redirects

Too many links are nowadays obscured by link shorteners and tracker URLs. For example, on Twitter, a link would point to https://t.co/1PT68A6LEt when the author meant to refer to https://voice.mozilla.org/. Learning the intended link target requires querying the shortener service, thus depending on external service to still exist and be reachable. Not so nice.

We could therefore consider href values of such links to be resources that belong to the document, and should thus be fetched and stored. It may be tough to decide when a link is an undesired redirect, and when it is a 'legit' redirect that should be retained. One approach is to always resolve all redirects. The original URL would of course be kept as an extra attribute.

A question is still whether we can actually obtain the redirection location. fetch(url, {method: 'head'}) sounds appropriate, but looking at the fetch specification (here), it looks like it might hide all redirection information for security reasons.

bug due to querySelector(All) assumptions

At least one bug is caused by using querySelectorAll and assuming it only returns HTML elements:

const linkElements = Array.from(rootElement.querySelectorAll('a, area'))
linkElements
        .filter(element => element.href.startsWith('javascript:'))

The HTML <a> and <area> elements guarantee that .href is a string (an empty string if the attribute is absent). But SVG’s <a> element does not, making these lines throw an error (discovered in the wild).

Need to check all uses of querySelector(All). Maybe we could…

  • do an instanceof check on the resulting elements;
  • or just check for the existence of the href attribute (or should we avoid interfering with unexpected namespaces?);
  • or we could select the elements in some other way, e.g. rootElement.getElementsByTagNameNS('http://www.w3.org/1999/xhtml','a').

Separately (in scope of issue #27), we should check whether javascript: URLs should be removed from SVG’s xlinks.
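The second option amounts to a small type guard; sketched here with plain objects standing in for DOM elements (no real DOM is involved):

```javascript
// Only treat elements whose .href is an actual string. SVG <a> elements
// expose an SVGAnimatedString object instead, which this guard filters out
// without an instanceof check.
function hasStringHref(element) {
  return typeof element.href === 'string'
}

// Mock elements for illustration (assumptions, not real DOM nodes):
const htmlAnchor = { href: 'javascript:void(0)' }
const svgAnchor = { href: { baseVal: 'javascript:void(0)', animVal: 'javascript:void(0)' } }

hasStringHref(htmlAnchor) // true
hasStringHref(svgAnchor)  // false
```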

Allow alternative blob serializations

I would like to create a service using freeze-dry that would essentially act as an IPFS archiving proxy. I'm iterating on it here: https://glitch.com/edit/#!/clone

I'm running into a few issues:

  1. Right now freeze-dry attempts to create data: URLs for everything, which is not what I would like to do; I want to save the data to IPFS and use a URL that corresponds to that. I did changes along those lines in the past (Gozala@161c4fe); would you be willing to accept a pull that applies them on top of the current master?
  2. Another issue I'm running into is that the server often times out before freeze-dry is finished. I'm starting to suspect that freezeDry fails silently (e.g. when loading https://clone.glitch.me/http://jeditoolkit.com/ I see all requests completing, but the promise never seems to resolve). Either way, I would like to find a way to not have to fetch all resources before being able to serve a response. Ideally the API would let me update all referenced URLs and give me back the updated markup without waiting to fetch them, allowing me to serve the HTML and handle referenced resources on demand. The basic idea is that you don't necessarily need to finish creating a bundle. I'm not exactly sure yet how an API like that would look, but the general idea would be something along these lines:
const archive = freezeDry.archive(doc, {...})
// The archive maintains a map of resources that correspond to the document,
// and .fetch() returns either a pending or fulfilled request corresponding
// to the resource URL, or starts one if not initiated yet.
const page = await archive.fetch(doc.URL)
// That way the proxy server can serve requests from the archive as it is
// being built up.
const css = await archive.fetch(new URL('style.css', doc.URL))
// ....

// Completes whenever all of the resources are finished
const bundle = await archive.write({
  open: async (metadata) => new Bundler(),
  write: async (resource, bundler) => bundler.write(resource),
  close: async (bundler) => await bundler.writeToFile()
})

Is it possible to use freeze-dry from server?

Hi! Thank you for this awesome library!

I'm building a simple website archival API (currently just submits URLs to selected archive sites) and I'd love to add freeze-dry as an addition to it -- I am relatively noob to the javascript world though, so I'm a bit lost on how to approach this;

I understand freeze-dry runs in the browser context (?), so something like playwright will be needed to do this which is what I've been trialing.

I tried to modify and run the playwright tests in the 'customisation' branch as a hacky starting point, and I'm currently stuck with this error when running npm run test

page.evaluate: ReferenceError: freezeDry is not defined

   > 17 |   const html = await page.evaluate('freezeDry(document, { now: new Date(1534615340948) })')
           |                           ^
      18 |   console.log(html)

Handle 404s and mismatching resource types

For example, we currently happily inline an html 404 page as if it was the desired resource, producing e.g. <img src="data:text/html;base64,......">. We could consider alternatives, such as replacing such URLs with about:invalid.
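A sanity check along these lines might look as follows (a sketch; isUsableResponse is a hypothetical helper, assuming the Fetch API's Response):

```javascript
// Reject responses that are not OK, or whose Content-Type does not match the
// kind of resource the link expects, instead of inlining an HTML error page.
function isUsableResponse(response, expectedTypePrefix) {
  if (!response.ok) return false
  const contentType = response.headers.get('content-type') || ''
  return contentType.startsWith(expectedTypePrefix)
}

// A 404 page served as text/html is not a usable image:
const notFound = new Response('<html>Not found</html>', {
  status: 404,
  headers: { 'content-type': 'text/html' },
})
isUsableResponse(notFound, 'image/') // false
```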

Inline iframe contents

Freeze-dry could be run recursively on iframes. Iframe contents can probably be put as a string in the srcdoc attribute.

Although deprecated, it would be nice to still support <frame>s too; they don't support srcdoc though, so we should try putting contents as a data URL in the src attribute.

Provide the option to get resources separately.

Instead of inlining everything, we could offer to return the resources separately. The application can then store it in whichever way it likes, and could provide the URL to replace the original URL with.

Particularly useful for deduplicating the resources, possibly using a content-addressing scheme.

Breaks on invalid URLs

A link such as <a href="http://"> somewhere in the document causes freeze-dry to throw an error.

Fails to fetch images (etc.) outside CSP

We refetch each resource (from cache, if possible) to obtain the content of e.g. images, because it seems there is no way to read it directly (if it would come from the same origin, drawing it onto a canvas could help get the data). However, fetching a resource may be restricted more strictly than loading images by the page's content security policy, causing the fetch to fail. I don't know of a way to get around this, except by delegating the fetching to be run elsewhere (in the WebMemex, fetching could be done in the extension's background script).

Inlining corrupt stylesheets can corrupt html

We currently turn a <link rel="stylesheet" href="..."> into a <style> element with the resolved contents of that URL. If resolving results in a 404 html page for example, its content will be inserted into the document and mess it up completely. Things to look at and consider changing:

  1. Sanitise the content; set innerText rather than setting innerHTML?
  2. Put the stylesheet contents as a data URL in the link's href, instead of creating a <style> element; this was the initial approach but it was changed because of a performance problem in Firefox (could that be fixed in Firefox?).
  3. Try sending the proper Accept headers when fetching the stylesheet.
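On point 1, one way to sanitise the content is to escape any occurrence of "</style" in the fetched text, so stray HTML can never close the <style> element early. This sketch uses a CSS character escape (\2F is the code point of "/"):

```javascript
// Escape the sequence that would close the <style> element. The HTML parser
// no longer sees "</style", while CSS reads \2F as a "/" where escapes are
// allowed. A sketch, not freeze-dry's actual behaviour.
function escapeForStyleElement(css) {
  return css.replace(/<\/style/gi, '<\\2F style')
}

escapeForStyleElement('</style><script>alert(1)</script>')
// '<\2F style><script>alert(1)</script>'
```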

Allow passing custom grabber for frame contents

As explained in src/Readme:

Although we try to clone each Document living inside a frame (recursively), it may be impossible
to access these inner documents because of the browser's single origin policy. If the document
inside a frame cannot be accessed, its current state cannot be captured. ...
When freeze-dry is run from a more privileged environment, such as a browser extension, it could
work around the single origin policy. A future improvement would be to allow providing a custom
function getDocInFrame(element) to enable such workarounds.

Handle encoding of subresources

Freeze-dry messes up if a stylesheet or framed document is encoded in utf16, utf32, or possibly other encodings. We use FileReader.readAsText to decode these resources, which by default assumes utf8 encoding. This assumption is adequate most of the time, but when it isn’t the resource is effectively unreadable.

I do not know enough about the standards, but I suppose the decoder should look at the HTTP Content-Type header, the file’s byte order mark (BOM), and in-document declarations (@charset in CSS, <meta charset=…> in HTML).

This detection & decoding issue seems so generic that it should not have to burden this repo, but I have not yet found the right tool. Some options I thought of:

  • The browser’s fetch, but unfortunately appears not to help with decoding; its Response.text() is spec'd to "return the result of running UTF-8 decode on bytes".
  • XMLHttpRequest.responseText does seem to respect HTTP header and BOM, though I am not sure about in-document declarations. And it feels a little outdated, as I think fetch was supposed to make it obsolete; but perhaps not.
  • Some javascript module? I did not yet find anything that comes close.

Tips welcome.

Note this issue is similar to issue #29, but that one concerns the DOM that the browser has already decoded for us; this issue is about subresources we fetch.
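As a partial sketch: the byte order mark, at least, can be detected before choosing the TextDecoder (HTTP headers and in-document declarations would still need separate handling):

```javascript
// Pick a decoder based on the byte order mark, falling back to UTF-8.
// Only a partial solution: ignores HTTP headers and @charset/<meta charset>.
function detectEncodingFromBom(bytes) {
  if (bytes[0] === 0xFF && bytes[1] === 0xFE) return 'utf-16le'
  if (bytes[0] === 0xFE && bytes[1] === 0xFF) return 'utf-16be'
  return 'utf-8' // a UTF-8 BOM (EF BB BF), if present, is stripped by TextDecoder
}

function decodeResource(bytes) {
  return new TextDecoder(detectEncodingFromBom(bytes)).decode(bytes)
}

// 'abc' encoded as UTF-16LE with a BOM decodes correctly:
decodeResource(new Uint8Array([0xFF, 0xFE, 0x61, 0x00, 0x62, 0x00, 0x63, 0x00])) // 'abc'
```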

Allow getting the result before completion.

A slow refetch of a resource can now slow down the whole procedure. To allow more control, we could accept a timeout as an argument, and return whatever is ready at that moment. But rather, I would let the application request the result when it wants to have it, which it could possibly do multiple times (we could e.g. emit an event when the intermediate result has been updated).
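The timeout variant could be as simple as racing each pending fetch against a timer (a sketch; withTimeout is a hypothetical helper, not part of freeze-dry's API):

```javascript
// Resolve with a fallback value if the given promise takes longer than ms,
// so one slow subresource cannot stall the whole snapshot.
function withTimeout(promise, ms, fallback) {
  const timer = new Promise(resolve => setTimeout(resolve, ms, fallback))
  return Promise.race([promise, timer])
}

// A fetch that takes too long yields the fallback instead:
const slowFetch = new Promise(resolve => setTimeout(resolve, 200, 'resource data'))
withTimeout(slowFetch, 20, null).then(result => {
  // result === null here: the timer won the race
})
```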
