
wikiteam / wikiteam


Tools for downloading and preserving wikis. We archive wikis, from Wikipedia to the tiniest wikis. As of 2023, WikiTeam has preserved more than 350,000 wikis.

Home Page: https://github.com/WikiTeam

License: GNU General Public License v3.0

Shell 0.21% Python 93.28% Perl 3.28% TeX 3.23%
wiki wikiteam wikipedia archive-wikis backup dump export mediawiki python digital-preservation

wikiteam's Introduction

WikiTeam

We archive wikis, from Wikipedia to the tiniest wikis

WikiTeam software is a set of tools for archiving wikis. They work on MediaWiki wikis, but we want to expand to other wiki engines. As of 2023, WikiTeam has preserved more than 350,000 wikis, several wikifarms, regular Wikipedia dumps and 34 TB of Wikimedia Commons images.

There are thousands of wikis on the Internet. Every day some of them stop being publicly available and, for lack of backups, are lost forever. Millions of people download tons of media files (movies, music, books, etc.) from the Internet, serving as a kind of distributed backup. Wikis, most of them under free licenses, disappear from time to time because nobody grabbed a copy of them. That is a shame that we would like to fix.

WikiTeam is the Archive Team (GitHub) subcommittee on wikis. It was founded and originally developed by Emilio J. Rodríguez-Posada, a Wikipedia veteran editor and amateur archivist. Many people have helped by sending suggestions, reporting bugs, writing documentation, providing help in the mailing list and making wiki backups. Thanks to all, especially to: Federico Leva, Alex Buie, Scott Boyd, Hydriz, Platonides, Ian McEwen, Mike Dupont, balr0g and PiRSquared17.

Documentation · Source code · Download available backups · Community · Follow us on Twitter

Quick guide

This is a very quick guide to the most commonly used features of the WikiTeam tools. For further information, read the tutorial and the rest of the documentation. You can also ask on the mailing list.

Requirements

Requires Python 2.7.

Confirm you satisfy the requirements:

pip install --upgrade -r requirements.txt

or, if you don't have enough permissions for the above,

pip install --user --upgrade -r requirements.txt

Download any wiki

To download any wiki, use one of the following options:

python dumpgenerator.py http://wiki.domain.org --xml --images (complete XML histories and images)

If the script can't find the API and/or index.php paths by itself, you can provide them:

python dumpgenerator.py --api=http://wiki.domain.org/w/api.php --xml --images

python dumpgenerator.py --api=http://wiki.domain.org/w/api.php --index=http://wiki.domain.org/w/index.php --xml --images

If you only want the XML histories, just use --xml. For only the images, just --images. For only the current version of every page, --xml --curonly.
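
For example, to dump only the latest revision of every page:

python dumpgenerator.py http://wiki.domain.org --xml --curonly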

You can resume an aborted download:

python dumpgenerator.py --api=http://wiki.domain.org/w/api.php --xml --images --resume --path=/path/to/incomplete-dump

See more options:

python dumpgenerator.py --help

Download Wikimedia dumps

To download Wikimedia XML dumps (Wikipedia, Wikibooks, Wikinews, etc.), you can run:

python wikipediadownloader.py (download all projects)

See more options:

python wikipediadownloader.py --help

Download Wikimedia Commons images

There is a script for this, but we have uploaded the tarballs to Internet Archive, so it's more useful to reseed their torrents than to re-generate old ones with the script.

Developers


You can run the tests easily with the tox command. It is probably already present in your operating system (you need version 1.6); if it is not, you can install it from PyPI with: pip install tox.

Example usage:

$ tox
py27 runtests: commands[0] | nosetests --nocapture --nologcapture
Checking http://wiki.annotation.jp/api.php
Trying to parse かずさアノテーション - ソーシャル・ゲノム・アノテーション.jpg from API
Retrieving image filenames
.    Found 266 images
.
-------------------------------------------
Ran 1 test in 2.253s

OK
_________________ summary _________________
  py27: commands succeeded
  congratulations :)
$

This use of GitHub is not an endorsement

This project is currently hosted by GitHub for legacy reasons. GitHub is not recommended as it's a service running on proprietary software and does not respect copyleft.

Free software needs free tools: support the campaign Give up GitHub from the Software Freedom Conservancy.

(This section is released under CC-0.)


wikiteam's People

Contributors

ab2525, alexia, balr0g, danieloaks, elifoster, emijrp, erkan-yilmaz, gt-610, hashar, hydriz, makoshark, mirkosertic, mrshu, nemobis, nsapa, pirsquared17, plexish, pokechu22, rhinosf1, robkam, scottdb, seanyeh, shreyasminocha, simonliu99, southparkfan, timgates42, timsc, vss-devel, yzqzss, zerote000


wikiteam's Issues

Create script to automatically download a list of wikis and upload dumps to Internet Archive

From [email protected] on July 16, 2011 15:11:19

This needs at least issue 8 and issue 34 to be fixed, I'd say; issue 32 and issue 33 would also be nice. It's not a project to start immediately.

The script would take a list of wikis (api.php links; perhaps index.php could suffice), with their license if necessary, and then:
1. start each dump and bring it to completion (or at least report failed dumps and resume them on the next run);
2. check integrity;
3. create a 7z archive for the history and a tar for the images;
4. create a CSV for bulk upload to Internet Archive ( https://wiki.archive.org/twiki/bin/view/Main/IAS3BulkUploader ), with the identifier equal to the wiki domain/URL, a standard description and tags, and the licenseurl as extracted from the API or specified in the original list;
5. start the bulk upload (the uploader already has autoresume/retry features).

In addition to the normal dumpgenerator options, it would probably be useful to be able to specify how many parallel downloads (dumpgenerator instances) should be run.
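
A very rough sketch of such a driver script, under heavy assumptions (the wikis.txt input file, the dump directory naming, the domain2prefix helper and the CSV columns are all placeholders, not the real uploader format):

import csv
import glob
import subprocess
import urlparse

def domain2prefix(api_url):
    # e.g. http://wiki.domain.org/w/api.php -> wikidomainorg (naming is an assumption)
    host = urlparse.urlparse(api_url).netloc
    return host.replace('www.', '').replace('.', '').replace('-', '')

wikis = [l.strip() for l in open('wikis.txt') if l.strip()]  # one api.php URL per line
rows = []
for api in wikis:
    prefix = domain2prefix(api)
    # Start (or resume) the dump; report failures and retry them on the next run.
    if subprocess.call(['python', 'dumpgenerator.py', '--api=%s' % api, '--xml', '--images']) != 0:
        print('Dump failed for %s, will resume it on the next run' % api)
        continue
    # Compress the history and the images (directory layout is assumed).
    dumpdir = glob.glob('%s-*-wikidump' % prefix)[0]
    histories = glob.glob('%s/*-history.xml' % dumpdir)
    subprocess.call(['7z', 'a', '%s-history.xml.7z' % prefix] + histories)
    subprocess.call(['tar', 'cf', '%s-images.tar' % prefix, '%s/images' % dumpdir])
    rows.append([prefix, api, 'Wiki dump of %s' % api])

# CSV for the Internet Archive bulk uploader (column names are placeholders).
with open('ia-bulk.csv', 'wb') as f:
    csv.writer(f).writerows([['identifier', 'source', 'description']] + rows)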

Original issue: http://code.google.com/p/wikiteam/issues/detail?id=35

The API check should check that the API actually works

From [email protected] on April 08, 2012 09:38:00

http://dachaja.net/api.php taken from the new list:

Checking api.php...
api.php is OK
Traceback (most recent call last):
File "dumpgenerator.py", line 1029, in
main()
File "dumpgenerator.py", line 838, in main
config, other = getParameters(params=params)
File "dumpgenerator.py", line 795, in getParameters
if checkIndexphp(config['index']):
File "dumpgenerator.py", line 818, in checkIndexphp
f = urllib2.urlopen(req)
File "/usr/lib/python2.7/urllib2.py", line 126, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 400, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 513, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 438, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 372, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 521, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 500: Internal Server Error

The wiki has some database problems, so some queries work and some don't. For instance http://dachaja.net/api.php?action=query&meta=siteinfo :

and by the way the main page gives a DB error

from within function "SqlBagOStuff::get". Database returned error "144: Table './Dachaja_net/Dachaja_netobjectcache' is marked as crashed and last (automatic?) repair failed (localhost)".
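
A sketch of what a stricter check might look like, issuing a real siteinfo query and requiring a parseable answer (the function name is made up):

import json
import urllib2

def checkAPIWorks(api_url):
    """Return True only if api.php answers an actual query with valid JSON."""
    url = api_url + '?action=query&meta=siteinfo&siprop=general&format=json'
    try:
        data = json.load(urllib2.urlopen(url, timeout=30))
    except Exception:  # HTTP 500, timeouts, broken JSON, etc.
        return False
    # A working MediaWiki API returns a 'query' -> 'general' block.
    return 'query' in data and 'general' in data['query']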

Original issue: http://code.google.com/p/wikiteam/issues/detail?id=48

Dumpgenerator.py script fails when the main page isn't "Main Page"/Special:Version is localised

From [email protected] on December 04, 2011 10:25:12

When trying to run dumpgenerator.py on some wikis of Referata (using the script found in trunk/referata), a 404 error occurs when the main page is located at, say, Project:Main Page instead of the usual "Main Page".

Also, dumpgenerator.py only looks for Special:Version; localised names for it (e.g. Especial:Versión) would bring disaster.

This shouldn't affect the actual dump, though I am still reporting the error in case people actually read the index.html/SpecialVersion.html saved with dumps.
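
One possible way around both problems is to ask the API for the localised names instead of hardcoding them; a minimal sketch (assuming the wiki's API is reachable):

import json
import urllib2

def getLocalisedNames(api_url):
    """Fetch the wiki's real main page title and its localised Special:Version name."""
    url = (api_url + '?action=query&meta=siteinfo'
           '&siprop=general|namespaces|specialpagealiases&format=json')
    data = json.load(urllib2.urlopen(url))['query']
    mainpage = data['general']['mainpage']          # e.g. u'Project:Main Page'
    special_ns = data['namespaces']['-1']['*']      # e.g. u'Especial'
    version = u'Version'
    for sp in data.get('specialpagealiases', []):
        if sp['realname'] == 'Version':
            version = sp['aliases'][0]              # e.g. u'Versi\xf3n'
            break
    return mainpage, u'%s:%s' % (special_ns, version)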

Original issue: http://code.google.com/p/wikiteam/issues/detail?id=43

Wikia images fail

From [email protected] on September 08, 2011 22:00:00

Bug report sent by Schbirid in IRC.

...
Wrack, 2 edits
Wrath, 13 edits
Xaero, 17 edits
YA, 1 edits
Yellow Armor, 16 edits
Zombie, 3 edits
Zombie (Q1), 38 edits
Downloaded 2860 pages
Zombie (Q4), 16 edits
XML dump saved at... quakewikiacom-20110908-history.xml
Retrieving image filenames
<!doctype html>

<meta property="og:type" content="article" />

<meta property="og:site_name" cont
This wiki doesn't use marks to split contain

Original issue: http://code.google.com/p/wikiteam/issues/detail?id=39

Download history of multiple pages at once

From [email protected] on July 09, 2011 17:50:35

I don't understand the code much, so excuse me if I say silly things.
The script currently seems to download the history of one page at a time. To reduce the number of requests and make things faster, it would probably be better if it could ask the API for the history of multiple titles at once, after checking this won't cross the revisions limit.
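
A rough sketch of the batching idea, posting several titles per Special:Export request (the parameters mirror a plain Special:Export form submission; the right batch size depends on the wiki's revision limits and is left out here):

import urllib
import urllib2

def exportBatch(index_url, titles):
    """Fetch the histories of several pages in one Special:Export request."""
    params = {
        'title': 'Special:Export',
        'pages': '\n'.join(t.encode('utf-8') for t in titles),
        'action': 'submit',
    }
    req = urllib2.Request(index_url, urllib.urlencode(params),
                          {'User-Agent': 'WikiTeam'})
    return urllib2.urlopen(req).read()

# xml = exportBatch('http://wiki.domain.org/w/index.php', [u'Title one', u'Title two'])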

Original issue: http://code.google.com/p/wikiteam/issues/detail?id=18

Citizendium image list error

Error loop "XML for ... is wrong"

From [email protected] on July 12, 2011 20:21:47

Apparently this error is quite frequent with some characters. This starts a neverending loop, see e.g. http://p.defau.lt/?vUHNXKoaCOfNkeor_0HmCg I removed that title from the title list and resumed the dump; the following pages were not downloaded, perhaps because they were invalid: http://p.defau.lt/?KeDck2rQZqGlp9MWmYmB_Q Could those (invalid?) titles be the actual problem behind the error?

Original issue: http://code.google.com/p/wikiteam/issues/detail?id=26

Convert .xml.7z to xml.bz2 for wiki2touch

From [email protected] on September 16, 2011 10:21:18

What is the expected output? What do you see instead? An articles.bin file, but I get a message that nothing was done by the wiki2touch transfer tool.
What version of the product are you using? On what operating system? Wiki2touch transfer tool 0.65 and Windows 7.
Please provide any additional information below. Hello,

first of all, thanks for your great project. I am using an iPhone with wiki2touch for offline Wikipedia and Wikitravel. In a few days I will start a long journey, so I was looking for a current Wikitravel dump. The problem is that my converter for wiki2touch (the wiki2touch transfer tool) doesn't work with your provided dump (.xml.7z); it expects an .xml.bz2 file, and just renaming doesn't work. Any idea what I can do to get an up-to-date articles.bin from Wikitravel?
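
If the only problem is the container format, something like this should produce the .xml.bz2 the transfer tool expects (assuming the 7z command-line tool is installed and the archive holds a single XML file; the file name is just an example):

import bz2
import subprocess

src = 'wikitravel-en-history.xml.7z'        # example name, use your downloaded dump
dst = src.replace('.xml.7z', '.xml.bz2')

# Stream the XML out of the 7z archive and recompress it as bz2,
# without writing the huge uncompressed XML to disk.
p = subprocess.Popen(['7z', 'e', '-so', src], stdout=subprocess.PIPE)
out = bz2.BZ2File(dst, 'wb')
while True:
    chunk = p.stdout.read(1024 * 1024)
    if not chunk:
        break
    out.write(chunk)
out.close()
p.wait()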

Original issue: http://code.google.com/p/wikiteam/issues/detail?id=40

Archive is created for the dump before it's complete

From [email protected] on April 12, 2012 08:13:55

launcher.py sometimes fails its checks that a dump is complete before sending it to 7z. I've seen several archives which didn't even contain a *-history.xml, and I now have one which was interrupted with this error:

Downloaded 46310 pages
唐诗宋词/唐朝/齐己/湘中春兴, 1 edits
Server is slow... Waiting some seconds and retrying...
An error have occurred while retrieving "唐诗宋词/唐朝/齐己/湘中送翁员外归闽"
Please, resume the dump, --resume

and whose dump ends with:

{{唐诗宋词/索引底部}}

which has been happily compressed and forgotten.
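
A minimal sanity check launcher.py could run before compressing anything, based on the fact that a finished MediaWiki export ends with a closing </mediawiki> tag (file layout and function name are illustrative):

import glob
import os

def dumpLooksComplete(dumpdir):
    """Refuse to compress a dump with no history XML, or with a truncated one."""
    histories = glob.glob(os.path.join(dumpdir, '*-history.xml'))
    if not histories:
        return False
    size = os.path.getsize(histories[0])
    with open(histories[0], 'rb') as f:
        f.seek(max(0, size - 1024))      # only the tail is needed
        return '</mediawiki>' in f.read()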

Original issue: http://code.google.com/p/wikiteam/issues/detail?id=50

split('<page>\n') includes <title> and <id> several times in > 1000 edits histories

From [email protected] on April 23, 2011 20:16:21

File "dumpgenerator.py", line 279, in getXMLPage
xml = xml.split('')[0]+xml2.split('\n')[1]

  <title>Main Page</title>
  <id>15580374</id>
  <revision>
    <id>418009832</id>
    <timestamp>2011-03-09T19:57:06Z</timestamp>
    <contributor>
      <username>Krinkle</username>
      <id>9014223</id>
    </contributor>
    <minor/>
    <comment>Bug in software ?</comment>
    <text xml:space="preserve" bytes="5244">&lt;!--        BANNER ACROSS TOP OF PAGE        --&gt;

Original issue: http://code.google.com/p/wikiteam/issues/detail?id=9

Wikis with poor-man's HTTP Basic Auth fail with HTTP 401

From [email protected] on October 24, 2011 22:03:55

What steps will reproduce the problem?
1. Use the current dumpgenerator.py on wiki.musicbrainz.org (no API, only index, at http://wiki.musicbrainz.org/ )
2. Watch it fail
What is the expected output? What do you see instead? It should start dumping. Instead, it throws a urllib2 error for HTTP 401.
What version of the product are you using? On what operating system? Current SVN, Arch Linux, Python 2.7.2.
Please provide any additional information below. The attached file fixes the problem for me (I've added --username and --password to the config and use them, when they're set, to add the proper Basic auth HTTP header).

Attachment: dumpgenerator.py
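
The patch boils down to sending an Authorization header with every request; a minimal sketch of the same idea (option parsing omitted):

import base64
import urllib2

def openWithBasicAuth(url, username, password):
    """Open a URL on a wiki protected by HTTP Basic Auth."""
    req = urllib2.Request(url)
    credentials = base64.b64encode('%s:%s' % (username, password))
    req.add_header('Authorization', 'Basic %s' % credentials)
    return urllib2.urlopen(req)

urllib2's HTTPBasicAuthHandler is an alternative, but it only sends credentials after receiving a 401 challenge.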

Original issue: http://code.google.com/p/wikiteam/issues/detail?id=41

commonsdownloader terminates for special characters

From [email protected] on March 01, 2012 08:31:00

Traceback (most recent call last):
File "commonsdownloader.py", line 150, in
main()
File "commonsdownloader.py", line 126, in main
if not os.path.getsize('%s/%s' % (savepath, img_saved_as_)): #empty file?...
File "/usr/lib/python2.7/genericpath.py", line 49, in getsize
return os.stat(filename).st_size
OSError: [Errno 2] No such file or directory: '2005/03/23/20081110210524!"Colored"_drinking_fountain_from_mid-20th_century_with_african-american_drinking.jpg'

Original issue: http://code.google.com/p/wikiteam/issues/detail?id=45

Get index.php URL from API

From [email protected] on April 09, 2012 00:26:39

When the API is provided, dumpgenerator shouldn't try to guess the URL of index.php; it must take it from the "script" value in https://www.mediawiki.org/wiki/API:Meta#siteinfo_.2F_si . I'm currently getting a lot of "Error in index.php, please, provide a correct path to index.php" from my chunk of the list of 7000 wikis (e.g. http://ctl.mesacc.edu/wiki/api.php ).
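
A sketch of reading index.php from siteinfo instead of guessing; 'script' is a path relative to the wiki's server, so it is joined with the 'server' value:

import json
import urllib2
import urlparse

def getIndexFromAPI(api_url):
    """Derive the index.php URL from the API's own siteinfo data."""
    url = api_url + '?action=query&meta=siteinfo&siprop=general&format=json'
    general = json.load(urllib2.urlopen(url))['query']['general']
    # 'server' is e.g. u'http://ctl.mesacc.edu' (may be protocol-relative on
    # some wikis), 'script' is e.g. u'/wiki/index.php'.
    return urlparse.urljoin(general['server'], general['script'])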

Original issue: http://code.google.com/p/wikiteam/issues/detail?id=49

Create checker for launcher.py

From [email protected] on April 08, 2012 09:29:19

We need a script to check all downloads and fix them, as the ArchiveTeam guys usually do, especially until issue 33 is fixed.
Currently, there's no way to check what task has actually been completed, for instance whether a dump was terminated with one of the usual silly errors like "IOError: [Errno socket error] [Errno -2] Name or service not known" or whether the 7z archives have not been created (as in my case, currently for all wikis).
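
A small sketch of the kind of checker meant here, scanning a working directory for dumps that are missing their history XML or their 7z archive (the directory and archive naming is an assumption):

import glob
import os

def findBrokenDumps(basedir='.'):
    """List dump directories with no *-history.xml or no matching .7z archive."""
    broken = []
    for dumpdir in sorted(glob.glob(os.path.join(basedir, '*-wikidump'))):
        has_history = bool(glob.glob(os.path.join(dumpdir, '*-history.xml')))
        has_archive = os.path.exists(dumpdir + '.7z')
        if not (has_history and has_archive):
            broken.append(dumpdir)
    return broken

# for d in findBrokenDumps():
#     print('Incomplete: %s' % d)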

Original issue: http://code.google.com/p/wikiteam/issues/detail?id=47

Images download fails often

From [email protected] on July 12, 2011 20:32:06

This could be a server stability problem and this bug could be considered invalid, but anyway...
While downloading strategywiki.org images, I often have to resume the dump because it fails. Here are two (identical) tracebacks: http://p.defau.lt/?kAhwwc8_kW5breIN7JGeSw http://p.defau.lt/?ZElZ2XUjVvrkTeOwiZpfVg Both were at about 4000 images, I'll check whether it's a pattern.
This is annoying mostly because you have to resume manually, and restarting the script takes quite long and is CPU-intensive while it checks the previous run.
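
Until that is handled in the script itself, a small wrapper can do the resuming automatically; a sketch (the API URL, path and retry limits are arbitrary examples):

import subprocess
import time

# Keep resuming the dump until dumpgenerator exits cleanly.
cmd = ['python', 'dumpgenerator.py', '--api=http://strategywiki.org/w/api.php',
       '--xml', '--images', '--resume', '--path=strategywiki-dump']
for attempt in range(20):
    if subprocess.call(cmd) == 0:
        break
    print('Dump failed (attempt %d), retrying in 60 seconds...' % (attempt + 1))
    time.sleep(60)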

Original issue: http://code.google.com/p/wikiteam/issues/detail?id=27

Script looks for a title that exists neither in the wiki nor in the titles list

From [email protected] on July 15, 2011 19:05:33

On a couple of wikis the script got stuck in an error loop because it was looking for a non-existent page. The title was recurrent and strange, "AMF5LKE43MNFGHKSDMRTJ", so I thought they were just weird wikis with mirrored junk content.
On reviewing it, though, I discovered that the title never existed on those wikis and is not contained in the title list either, and Special:Export and the API work. Probably some temporary glitch of Python or my machine which will be resolved by the next run, but still worth tracking.
I attach the title lists, and here's the terminal output (nothing useful anywhere): http://p.defau.lt/?tr7ACOxyEH4oBmQcD0_JCg http://p.defau.lt/?2_qzc5xn0_9jFv_eN5HhGw

Attachment: wikidocorg-20110712-titles.txt.7z wikiznanieru_ru_wz-20110712-titles.txt.7z

Original issue: http://code.google.com/p/wikiteam/issues/detail?id=31

English Wikitravel fails to download images

From [email protected] on July 26, 2011 22:36:59

Checking api.php...
api.php is OK
Checking index.php...
index.php is OK
Analysing http://wikitravel.org/wiki/en/api.php Loading config file...
Resuming previous dump process...
Title list was completed in the previous session
XML dump was completed in the previous session
Image list is incomplete. Reloading...
Retrieving image filenames
Traceback (most recent call last):
File "dumpgenerator.py", line 1013, in
main()
File "dumpgenerator.py", line 936, in main
images = getImageFilenamesURL(config=config)
File "dumpgenerator.py", line 447, in getImageFilenamesURL
f = urllib2.urlopen(req)
File "/usr/lib/python2.7/urllib2.py", line 126, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 397, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 510, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 435, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 369, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 518, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 500: Internal Server Error

Original issue: http://code.google.com/p/wikiteam/issues/detail?id=37

Large histories memory error

From [email protected] on April 16, 2011 23:35:03

RationalWiki:SPOV 18 edits
RationalWiki:Saloon Bar 1 edits
RationalWiki:Saloon Bar/Drink counter/Archive 1 2 edits
RationalWiki:Saloon bar 2000 edits
RationalWiki:Saloon bar 3000 edits
RationalWiki:Saloon bar 4000 edits
RationalWiki:Saloon bar 5000 edits
RationalWiki:Saloon bar 6000 edits
RationalWiki:Saloon bar 7000 edits
RationalWiki:Saloon bar 8000 edits
RationalWiki:Saloon bar 9000 edits
RationalWiki:Saloon bar 10000 edits
RationalWiki:Saloon bar 11000 edits
RationalWiki:Saloon bar 12000 edits
Traceback (most recent call last):
File "dumpgenerator.py", line 878, in
f.close()
File "dumpgenerator.py", line 785, in main
xmltitles = re.findall(r'<title>([^<]+)</title>', l) #weird if found more than 1, but maybe
File "dumpgenerator.py", line 335, in generateXMLDump
if c % 10 == 0:
File "dumpgenerator.py", line 279, in getXMLPage
xml = xml.split('')[0]+xml2.split('\n')[1]
MemoryError
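
The MemoryError comes from concatenating ever larger strings while a huge page history is rebuilt in memory; one way out would be to append each chunk of revisions straight to the dump file instead. A sketch, where fetchRevisionChunk is a hypothetical helper standing in for the Special:Export request of the next batch of revisions:

def writePageHistory(dumpfile, title, fetchRevisionChunk):
    """Append one page's history to the dump chunk by chunk.

    Only a single chunk of revisions is ever held in memory, so even a page
    with tens of thousands of edits cannot exhaust it.
    """
    dumpfile.write('  <page>\n    <title>%s</title>\n' % title.encode('utf-8'))
    offset = None
    while True:
        revisions, offset = fetchRevisionChunk(title, offset)
        dumpfile.write(revisions)
        if offset is None:
            break
    dumpfile.write('  </page>\n')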

Original issue: http://code.google.com/p/wikiteam/issues/detail?id=8

Wikimedia images can't be downloaded

From [email protected] on June 13, 2011 08:25:58

I tried to download the images from en.wikipedia.org, but it returns errors.

I type "python dumpgenerator.py --api= http://en.wikipedia.org/w/api.php --images"
It return as followed:
Checking api.php...
api.php is OK
Checking index.php...
Error in index.php, please, provide a correct path to index.php

I type "python dumpgenerator.py --index= http://en.wikipedia.org/w/index.php --images"
It return as followed:
Checking index.php...
Error in index.php, please, provide a correct path to index.php

What's wrong with it?
Thank you!

Original issue: http://code.google.com/p/wikiteam/issues/detail?id=16

Sometimes pages in namespaces with strange aliases will not be found (wikiHow)

From [email protected] on July 10, 2011 10:30:38

I'm downloading the English wikiHow and the script finds 0 revisions for every main-namespace talk page.
For instance: http://www.wikihow.com/Discussion:Make-a-Robot-at-Home «The page "Discussion:Make a Robot at Home" was missing in the wiki (probably deleted)».
This is probably because they changed the name of the namespace in some weird way, so they always use Discussion: instead of Talk: (although the latter redirects to the former: http://www.wikihow.com/Talk:Make-a-Robot-at-Home ), and for some reason this won't work with the API, although http://www.wikihow.com/index.php?title=Special:Export gives you the required pages.
wikiHow is known to patch/hack MediaWiki in a lot of improper ways, but I don't understand what could cause this problem.
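
Asking the wiki itself for its namespace names and aliases, instead of assuming the canonical ones, might sidestep this; a sketch:

import json
import urllib2

def getTalkNamespaceNames(api_url):
    """Return every name the wiki accepts for the main talk namespace (id 1)."""
    url = (api_url + '?action=query&meta=siteinfo'
           '&siprop=namespaces|namespacealiases&format=json')
    data = json.load(urllib2.urlopen(url))['query']
    names = [data['namespaces']['1']['*']]     # e.g. u'Discussion' on wikiHow
    names += [a['*'] for a in data.get('namespacealiases', []) if a['id'] == 1]
    return names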

Original issue: http://code.google.com/p/wikiteam/issues/detail?id=24

0 edits [downloaded]

From [email protected] on September 02, 2011 10:02:06

Sometimes a downloaded page is reported to have 0 edits. The history of a page can't be empty, so this doesn't mean anything: either the log should be corrected, or something needs to be done when this happens. If no revisions are actually downloaded, it could be just a temporary server problem (in my current download this happens occasionally with a single title or a cluster of them) and the script should retry. Some examples ( http://wikicafe.metacafe.com ):

Downloaded 345440 pages
4074553, 0 edits
4074554, 0 edits
4074555, 0 edits
4074556, 2 edits
4074557, 2 edits
4074559, 0 edits
4074560, 0 edits
4074561, 2 edits
4074562, 0 edits
4074563, 0 edits

Downloaded 351610 pages
4083509, 1 edits
4083510, 1 edits
4083511, 1 edits
4083515, 2 edits
4083516, 2 edits
4083519, 2 edits
408352, 2 edits
4083521, 2 edits
4083522, 0 edits
4083524, 2 edits

$ grep "<title>" *.xml -c;grep "" *.xml -c;grep "" *.xml -c;grep "" *.xml -c;grep "" *.xml -c
1751675
1751711
1751745
3114970
3115016
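
A sketch of the retry behaviour suggested above: if a page's export contains no revisions, treat it as a possible server hiccup and ask again a few times before reporting 0 edits (getXMLPage stands in for whatever function fetches one page's history):

import time

def getPageWithRetries(title, getXMLPage, retries=5):
    """Re-request a page that comes back with zero revisions before trusting it."""
    for attempt in range(retries):
        xml = getXMLPage(title)
        edits = xml.count('</revision>')
        if edits > 0:
            return xml, edits
        time.sleep(10 * (attempt + 1))   # probably a temporary server problem
    return xml, 0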

Original issue: http://code.google.com/p/wikiteam/issues/detail?id=38

dumpgenerator should follow redirects

From [email protected] on April 08, 2012 09:22:20

http://controls.engin.umich.edu/wiki/api.php (taken from the new list of 7000 wikis) redirects to HTTPS and I get

Checking api.php...
api.php is OK
Traceback (most recent call last):
File "dumpgenerator.py", line 1029, in
main()
File "dumpgenerator.py", line 838, in main
config, other = getParameters(params=params)
File "dumpgenerator.py", line 795, in getParameters
if checkIndexphp(config['index']):
File "dumpgenerator.py", line 818, in checkIndexphp
f = urllib2.urlopen(req)
File "/usr/lib/python2.7/urllib2.py", line 126, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 400, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 513, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 432, in error
result = self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 372, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 598, in http_error_302
new = self.redirect_request(req, fp, code, msg, headers, newurl)
File "/usr/lib/python2.7/urllib2.py", line 559, in redirect_request
raise HTTPError(req.get_full_url(), code, msg, headers, fp)
urllib2.HTTPError: HTTP Error 307: Temporary Redirect
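
One way to cope would be to resolve the redirect once up front with a plain GET (which urllib2 does follow) and rewrite the configured api.php/index.php URLs to the final https:// location before any POST requests are made; a sketch:

import urllib2

def resolveRedirect(url):
    """Return the final URL after following any HTTP redirects (e.g. http -> https)."""
    response = urllib2.urlopen(url)   # plain GETs are followed through 301/302/303/307
    return response.geturl()

# api = resolveRedirect('http://controls.engin.umich.edu/wiki/api.php')
# ...then use the returned https:// URL for all later requests.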

Original issue: http://code.google.com/p/wikiteam/issues/detail?id=46
