
dee2's Introduction

DEE2

The aim of DEE2 is to make all RNA-seq data freely available to everyone. DEE2 consists of three parts:

  • A webserver where end-users can search for and obtain datasets of interest.
  • A pipeline that can download and process SRA data as well as users' own fastq files.
  • A back-end that collects, filters and organises data provided by contributing worker nodes.

DEE2 currently supports analysis of several major species including A. thaliana, C. elegans, D. melanogaster, D. rerio, E. coli, H. sapiens, M. musculus, R. norvegicus and S. cerevisiae. The DEE2 pipeline downloads data from SRA and processes it, providing tabulated data that can be used in downstream statistical analysis.

How can I access the processed data?

The processed data is available at http://dee2.io and can also be accessed using our specially developed R interface.

If there is a particular dataset of interest missing from DEE2, you can use the request webform to have it expedited.

Want to learn more?

For information on the different parts of the app, see the specific documentation.

Feedback, bug reports, and contributions to code development are very welcome.

dee2's People

Contributors

markziemann

dee2's Issues

minor bug

Downloaded: 1 files, 206 in 0.02s (8.27 KB/s)
++ '[' '!=' bbcb41eb861fff23d7882dc61725a6d7 ']'
/root/code/volunteer_pipeline.sh: line 1076: [: !=: unary operator expected
+++ grep ACCESSION= tmp.html
+++ cut -d = -f2
++ ACCESSION=ERR1157845
++ STAR --genomeLoad LoadAndExit --genomeDir ../ref/scerevisiae/ensembl/star
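The `[: !=: unary operator expected` message at line 1076 happens because an unquoted variable expands to nothing inside the test. A minimal sketch of the failure and the quoting fix (MD5 and EXPECTED are assumed variable names, not necessarily those used in volunteer_pipeline.sh):

```shell
# Simulate the md5 value coming back empty from a failed download.
MD5=""
EXPECTED="bbcb41eb861fff23d7882dc61725a6d7"

# Broken form: with MD5 empty this expands to `[ != bbcb... ]`,
# which test reports as "unary operator expected".
# [ $MD5 != $EXPECTED ] && echo "md5 mismatch"

# Fixed form: quoting keeps the empty operand in place.
if [ "$MD5" != "$EXPECTED" ]; then
    echo "md5 mismatch"
fi
```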

Singularity image is not working properly

#Command used
cd /projects/bd17/mziemann/dee2 ; module add singularity/2.3.1 ; nohup singularity run -H /scratch/bd17/mziemann/singularity/home/:/home/mziemann/ -B /scratch/bd17/mziemann/singularity/tmp/:/tmp/ /projects/bd17/mziemann/dee2/mziemann_tallyup-2017-12-20-ac3231f7e15a.img ecoli & tailf nohup.out

#shell in
singularity shell -H /scratch/bd17/mziemann/singularity/home/:/home/mziemann/ -B /scratch/bd17/mziemann/singularity/tmp/:/tmp/ /projects/bd17/mziemann/dee2/mziemann_tallyup-2017-12-20-ac3231f7e15a.img

-annoying Perl locale error when using numsum/numaverage

-parallel fastq-dump sometimes works OK in parallel runs but shows errors with large datasets

2018-01-15T11:29:59 fastq-dump.2.8.2 err: unknown while writing file within file system module - unknown system error errno='Cannot allocate memory(12)'
Read 599593 spots for SRR2086962.sra
Written 599593 spots for SRR2086962.sra
2018-01-15T11:30:00 fastq-dump.2.8.2 err: unknown while writing file within file system module - failed SRR2086962.sra

An error occurred during processing.
A report was generated into the file '/home/mziemann//ncbi_error_report.xml'.
If the problem persists, you may consider sending the file
to '[email protected]' for assistance.

-skewer also running on 1 thread
-minion OK
-bowtie2 running on 1 thread, hopelessly slow (hung sometimes); I suspect additional errors as well

-STAR killed
Jan 15 11:38:46 ..... started STAR run
Jan 15 11:38:46 ..... loading genome
Jan 15 11:38:46 ..... started mapping
/root/code/volunteer_pipeline.sh: line 26: 8869 Killed STAR --runThreadN $THREADS --quantMode GeneCounts --genomeLoad LoadAndKeep --outSAMtype None --genomeDir $STAR_DIR --readFilesIn=test_R2_clip4.fq
cut: ReadsPerGene.out.tab: No such file or directory

-kallisto running on 1 thread

domain name not IP address

To better future-proof the app, direct traffic to a domain name rather than the IP address. IP addresses change from time to time.

Some of the QC metrics don't make sense

Here are the metrics for ERR1521856 - mapping rates are >100%

QualityEncoding:
Read1MinimumLength:101
Read1MedianLength:101
Read1MaxLength:101
Read2MinimumLength:NULL
Read2MedianLength:NULL
Read2MaxLength:NULL
NumReadsTotal:48413790
NumReadsQcPass:15599846

QcPassRate:32.2219%
PE_Read1_StarMapRateTest:36
PE_Read2_StarMapRateTest:NA
PE_Read1_Excluded:FALSE
PE_Read2_Excluded:FALSE
MappingFormat:SE
STAR_UniqMappedReads:17070762
STAR_Strandedness:Unstranded
STAR_UnmappedReads:1265655
STAR_MultiMappedReads:28469676
STAR_NoFeatureReads:175369
STAR_AmbiguousReads:360374
STAR_AssignedReads:16535019
STAR_UniqMapRate:109.429%
STAR_AssignRate:105.995%
Kallisto_MappedReads:18057444
Kallisto_MapRate:115.754%

QC_SUMMARY:'

Some errors generated

Some errors are being generated, and some code needs to be tidied up.

ecoli
Starting pipeline
./volunteer_pipeline.sh: line 20: [: ==: unary operator expected
SRR5985593
--2018-02-19 10:59:02-- https://www.ncbi.nlm.nih.gov/sra/SRR5985593
Resolving www.ncbi.nlm.nih.gov (www.ncbi.nlm.nih.gov)... 130.14.29.110, 2607:f220:41e:4290::110
Connecting to www.ncbi.nlm.nih.gov (www.ncbi.nlm.nih.gov)|130.14.29.110|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: 'SRR5985593.html'

SRR5985593.html [ <=> ] 60.97K 140KB/s in 0.4s

2018-02-19 10:59:05 (140 KB/s) - 'SRR5985593.html' saved [62435]

User input species and SRA metadata match. OK.
md5sum: ./volunteer_pipeline.sh: No such file or directory
cp: cannot stat './volunteer_pipeline.sh': No such file or directory
Starting ./volunteer_pipeline.sh SRR5985593
current disk space = 385617816
free memory = 128381928
SRR5985593 check if SRA file exists and download if neccessary
SRR5985593.sra

Error with dee_pipeline.R

x2$QC_summary<-unlist(lapply(x2$SRR_accession , qc_analysis, org=org))
Error in read.table(QCFILE, stringsAsFactors = F) :
no lines available in input
In addition: Warning messages:
1: In FUN(X[[i]], ...) : NAs introduced by coercion
2: In FUN(X[[i]], ...) : NAs introduced by coercion
3: In FUN(X[[i]], ...) : NAs introduced by coercion

Function causing the error:
qc_analysis

Command causing the error:
read.table(QCFILE, stringsAsFactors = F)

If the file is empty, then read.table barfs and causes downstream problems.
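One way to avoid the crash is to keep zero-byte QC files away from read.table in the first place. A sketch of a shell-side guard (the file name is hypothetical); alternatively, qc_analysis could wrap read.table in tryCatch and return NA for empty files:

```shell
# Hypothetical guard: skip zero-byte QC files before they reach
# dee_pipeline.R, where read.table() errors on empty input.
QCFILE="ERR9999999.qc"   # hypothetical accession
touch "$QCFILE"          # simulate an empty QC file

# -s is true only when the file exists and has non-zero size.
if [ -s "$QCFILE" ]; then
    echo "processing $QCFILE"
else
    echo "skipping empty $QCFILE"
fi
```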

Stricter data sanitation by the webserver (clean.sh)

For security and sanity, stricter dataset checking is required before data is transferred to the backend. For example, accession numbers need to be present in SRAdb and the organism must be correctly specified.

backend overloaded

The workload on the backend is too high; dee_pipeline.R and dee_progress.R need to be streamlined to make them faster and better.

Errors when running multiple parallel jobs in singularity image

Getting some errors when running multiple instances of the same Singularity container. I have a feeling it could be the way the tmp.html file is created, which collides between instances. A random tmpfile name would fix it.

Session Stop (Error: Server aborted session: No such file or directory)
ERR1204262.qc failed ascp download
ERR1204262.qc Validate the SRA file
ERR1204262.qc SRAfilesize
ERR1204262.qc.sra md5sums do not match. Deleting and exiting
54
ERR1216070.qc

adding: ERR1099520/ERR1099520.log (deflated 76%)
adding: ERR1099520/ERR1099520.qc (deflated 50%)
adding: ERR1099520/ERR1099520.se.tsv (deflated 81%)
adding: ERR1099520/volunteer_pipeline.sh (deflated 74%)
sftp> put ERR1099520.drerio.zip
Uploading ERR1099520.drerio.zip to /incoming/ERR1099520.drerio.zip
13
using user data
Error, not enough arguments specified. Quitting
14
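The random tmpfile fix could be as simple as replacing the hard-coded tmp.html with mktemp, so each instance works on its own file. A sketch, with a simulated server response standing in for the real wget call:

```shell
# Give each pipeline instance its own temp file so concurrent
# containers no longer collide on a shared tmp.html.
TMPHTML=$(mktemp tmp.XXXXXX)

# In the pipeline this would be something like:
# wget --no-check-certificate -O "$TMPHTML" "$ACC_REQUEST?ORG=$MY_ORG&Submit"
echo "ACCESSION=ERR0000000" > "$TMPHTML"   # simulated response

# Same extraction as the existing pipeline, against the unique file.
ACCESSION=$(grep ACCESSION= "$TMPHTML" | cut -d = -f2)
rm "$TMPHTML"
echo "$ACCESSION"
```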

Too much output text

The pipeline should run more quietly by default and only provide basic info (preparing index, starting dataset, completed dataset). The verbosity level should be configurable via an option.
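A sketch of what the optioned verbosity could look like; VERBOSE, log and info are assumed names, not existing pipeline functions:

```shell
# Default to quiet; set VERBOSE=1 for detailed output.
VERBOSE=${VERBOSE:-0}

log() {
    # Detailed messages, shown only in verbose mode.
    [ "$VERBOSE" -eq 1 ] && echo "$@"
    return 0
}

info() {
    # Basic progress info, always shown.
    echo "$@"
}

info "starting dataset SRR0000000"
log "skewer: trimmed 123456 reads"   # hypothetical detail line
info "completed dataset SRR0000000"
```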

enable https

Users are receiving security warnings; an SSL certificate needs to be installed to enable HTTPS.

some minor but annoying errors

wget: missing URL
Usage: wget [OPTION]... [URL]...

Try `wget --help' for more options.

As well as the verbose output of "cd -", eg: /dee
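Both nuisances have one-line fixes: guard the URL variable before calling wget, and redirect the path that `cd -` echoes. A sketch (URL is a stand-in for whichever variable ends up empty):

```shell
URL=""   # simulate the empty variable behind "wget: missing URL"

# Only call wget when there is actually a URL to fetch.
if [ -z "$URL" ]; then
    echo "URL not set, skipping download"
else
    wget "$URL"
fi

# `cd -` prints the target directory by design; send it to /dev/null.
cd /tmp
cd - > /dev/null
```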

Bug when dropping short read in a pair

The following error happens when one read of the pair is very short, and is excluded from the analysis. The remaining read should be reassigned but the renaming isn't working.

+ rm ERR1157768_2_fastqc.zip ERR1157768_2_fastqc.html
++ sed -n 2~4p ERR1157768_2.fastq
++ awk '{print length($1)}'
++ sort -g
++ head -1
+ FQ2_MIN_LEN=4
++ sed -n 2~4p ERR1157768_2.fastq
++ awk '{print length($1)}'
++ numaverage -M
+ FQ2_MEDIAN_LEN=4
++ sed -n 2~4p ERR1157768_2.fastq
++ awk '{print length($1)}'
++ sort -gr
++ head -1
+ FQ2_MAX_LEN=4
+ [[ 61 -lt 20 ]]
+ [[ 61 -ge 20 ]]
+ [[ 4 -lt 20 ]]
+ rm ERR1157768_2.fastq
++ echo ERR1157768_1.fastq
++ sed s/_1//
+ FQ1NEW=ERR1157768.fastq
+ mv ERR1157768_1.fastq ERR1157768.fastq
+ FQ1=ERR1157768.fastq
+ RDS=SE
+ [[ 61 -lt 20 ]]
+ echo ERR1157768 if colorspace, then quit
ERR1157768 if colorspace, then quit
+ '[' FALSE == TRUE ']'
+ echo ERR1157768 Dump the fastq file
ERR1157768 Dump the fastq file
+ rm ERR1157768.fastq
+ '[' FALSE == FALSE ']'
++ nproc
+ parallel-fastq-dump --threads 32 --outdir . --split-files --defline-qual + -s ERR1157768.sra
extra args: ['--split-files', '--defline-qual', '+']
spots: 87215
tempdir: /tmp/pfd_1hhcfnh6


du: cannot access 'ERR1157768.fastq': No such file or directory
+ FILESIZE=
+ FILESIZE=0
+ echo ERR1157768 file size 0
+ tee -a ERR1157768.log
ERR1157768 file size 0
+ rm ERR1157768.sra
+ '[' 0 -eq 0 ']'
+ echo ERR1157768 has no reads. Aborting
+ tee -a ERR1157768.attempts.txt
ERR1157768 has no reads. Aborting

CPU_SPEED not assigned correctly

Need to check that lscpu works from within the container

+ MEM=131297312
++ nproc
+ NUM_CPUS=32
++ lscpu
++ grep 'CPU max MHz:'
++ awk '{print $4}'
+ CPU_SPEED=
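A sketch of a more defensive assignment: try lscpu first, fall back to /proc/cpuinfo (which is visible inside most containers), and default to 0 so CPU_SPEED is never left empty:

```shell
# Primary source: lscpu's "CPU max MHz" row (absent on some systems).
CPU_SPEED=$(lscpu 2>/dev/null | grep 'CPU max MHz:' | awk '{print $4}')

# Fallback: current clock from /proc/cpuinfo.
if [ -z "$CPU_SPEED" ]; then
    CPU_SPEED=$(awk -F: '/cpu MHz/ {gsub(/ /, "", $2); print $2; exit}' /proc/cpuinfo 2>/dev/null)
fi

# Last resort: a defined value rather than an empty string.
CPU_SPEED=${CPU_SPEED:-0}
echo "CPU_SPEED=$CPU_SPEED"
```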

Pipeline MEM estimation

The error message calculated on line 1282 is wrong. For human it says 128 GB of RAM is required, but only ~60 GB is needed.

echo Error, analysis of $ORG data requires at least $(echo $MEM_REQD $MEM_FACTOR | awk '{print $1*$2}') kB in RAM, but there is only $MEM available.

should be

echo Error, analysis of $ORG data requires at least $MEM_REQD kB in RAM, but there is only $MEM available.

Accession specification doesn't work

Tried to specify accessions below

docker run -it mziemann/tallyup /root/code/volunteer_pipeline.sh scerevisiae ERR1157768,ERR1157769,ERR1157770

But it looks like it is being ignored

+ MY_ORG=scerevisiae
+ MY_ACCESSIONS=ERR1157768,ERR1157769,ERR1157770
+ MEM_FACTOR=2
+ export -f main
+ cd /root
+ echo Dumping star genomes from memory
Dumping star genomes from memory
++ find /root/ref/
++ grep '/ensembl/star$'
++ sed 's#\/code\/\.\.##'
+ for DIR in '$(find ~/ref/ | grep /ensembl/star$ | sed '\''s#\/code\/\.\.##'\'' )'
+ echo /root/ref/ecoli/ensembl/star
/root/ref/ecoli/ensembl/star
+ STAR --genomeLoad Remove --genomeDir /root/ref/ecoli/ensembl/star
Oct 12 03:58:42 ..... started STAR run
Oct 12 03:58:42 ..... loading genome

EXITING: Did not find the genome in memory, did not remove any genomes from shared memory

Oct 12 03:58:42 ...... FATAL ERROR, exiting
++ free
++ awk '$1 ~ /Mem:/  {print $2-$3}'
+ MEM=131296920
++ nproc
+ NUM_CPUS=32
++ lscpu
++ grep 'CPU max MHz:'
++ awk '{print $4}'
+ CPU_SPEED=
+ ACC_URL=https://vm-118-138-241-34.erc.monash.edu.au/acc.html
+ ACC_REQUEST=https://vm-118-138-241-34.erc.monash.edu.au/cgi-bin/acc.sh
+ SFTP_URL=118.138.241.34
+ '[' '!' -z scerevisiae ']'
++ echo 'athaliana celegans dmelanogaster drerio ecoli hsapiens mmusculus rnorvegicus scerevisiae'
++ tr ' ' '\n'
++ grep -wc scerevisiae
+ ORG_CHECK=1
+ '[' 1 -ne 1 ']'
++ echo 'athaliana        2853904
celegans        2652204
dmelanogaster   3403644
drerio  14616592
ecoli   1576132
hsapiens        28968508
mmusculus       26069664
rnorvegicus     26913880
scerevisiae     1644684'
++ grep -w scerevisiae
++ awk -v f=2 '{print $2*f}'
+ MEM_REQD=3289368
+ '[' 3289368 -gt 131296920 ']'
+ '[' -z scerevisiae ']'
+ echo scerevisiae
scerevisiae
+ export -f myfunc
+ export -f key_setup
+ TESTFILE=test_pass
+ '[' '!' -r test_pass ']'
+ echo Starting pipeline
Starting pipeline
+ '[' 2 -eq 2 ']' -a '[' -z ']'
**/root/code/volunteer_pipeline.sh: line 1214: [: too many arguments**
+ count=1
+ '[' 1 -lt 30 ']'
+ ((  count++  ))
+ cd /root
+ echo 2
2
++ myfunc scerevisiae https://vm-118-138-241-34.erc.monash.edu.au/cgi-bin/acc.sh
++ MY_ORG=scerevisiae
++ ACC_REQUEST=https://vm-118-138-241-34.erc.monash.edu.au/cgi-bin/acc.sh
++ wget --no-check-certificate -r -O tmp.html 'https://vm-118-138-241-34.erc.monash.edu.au/cgi-bin/acc.sh?ORG=scerevisiae&Submit'
WARNING: combining -O with -r or -p will mean that all downloaded content
will be placed in the single file you specified.

--2017-10-12 03:58:43--  https://vm-118-138-241-34.erc.monash.edu.au/cgi-bin/acc.sh?ORG=scerevisiae&Submit
Resolving vm-118-138-241-34.erc.monash.edu.au (vm-118-138-241-34.erc.monash.edu.au)... 118.138.241.34
Connecting to vm-118-138-241-34.erc.monash.edu.au (vm-118-138-241-34.erc.monash.edu.au)|118.138.241.34|:443... connected.
WARNING: cannot verify vm-118-138-241-34.erc.monash.edu.au's certificate, issued by 'CN=vm-118-138-241-34.erc.monash.edu.au':
  Self-signed certificate encountered.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: 'tmp.html'

tmp.html                                                                [ <=>                                                                                                                                                                ]     206  --.-KB/s    in 0.03s   

2017-10-12 03:58:43 (5.92 KB/s) - 'tmp.html' saved [206]

FINISHED --2017-10-12 03:58:43--
Total wall clock time: 0.2s
Downloaded: 1 files, 206 in 0.03s (5.92 KB/s)
++ '[' '!=' bbcb41eb861fff23d7882dc61725a6d7 ']'
/root/code/volunteer_pipeline.sh: line 1069: [: !=: unary operator expected
+++ grep ACCESSION= tmp.html
+++ cut -d = -f2
++ ACCESSION=ERR1157783
++ STAR --genomeLoad LoadAndExit --genomeDir ../ref/scerevisiae/ensembl/star

EXITING because of FATAL ERROR: could not open genome file ../ref/scerevisiae/ensembl/star/genomeParameters.txt
SOLUTION: check that the path to genome files, specified in --genomeDir is correct and the files are present, and have user read permsissions

Oct 12 03:58:43 ...... FATAL ERROR, exiting
++ echo ERR1157783
+ ACCESSION=ERR1157783
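The `[: too many arguments` at line 1214 looks like a single `[` invocation of the form `[ a ] -a [ b ]`, which is malformed; because it always fails, the accession branch never runs and the list is ignored. A hedged reconstruction of the fix, joining two separate tests with `&&` (variable names are assumptions):

```shell
# Simulate the invocation: two arguments, an organism and an accession list.
set -- scerevisiae ERR1157768,ERR1157769,ERR1157770
MY_ACCESSIONS=$2

# Broken: [ $# -eq 2 ] -a [ -n "$MY_ACCESSIONS" ]  -> "too many arguments"
# Fixed: two tests joined by the shell, not passed to `[` as one call.
if [ "$#" -eq 2 ] && [ -n "$MY_ACCESSIONS" ]; then
    MODE=accessions   # process the user-specified accessions
else
    MODE=queue        # fall back to requesting accessions from the server
fi
echo "$MODE"
```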

error when specifying SRR accessions

Starting pipeline
++ echo ecoli ERR1450560,ERR1450561
++ grep -wc '-f'

+ OWN_DATA=0
+ '[' '!' -z 0 ']'
+ echo own data specified
own data specified
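Two things go wrong here: grep parses the literal pattern -f as its own option, and the subsequent emptiness test treats the count "0" as true. A sketch of both fixes (the argument string is hypothetical):

```shell
# Hypothetical own-data invocation containing a literal -f flag.
ARGS="ecoli -f own_R1.fastq.gz own_R2.fastq.gz"

# `--` ends grep's option parsing so "-f" is treated as a pattern.
OWN_DATA=$(echo "$ARGS" | tr ' ' '\n' | grep -wc -- '-f')

# Compare numerically: "0" is a non-empty string, so the original
# `[ ! -z $OWN_DATA ]` printed "own data specified" regardless.
if [ "$OWN_DATA" -ne 0 ]; then
    echo "own data specified"
else
    echo "SRA accessions specified"
fi
```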

Some projects have >500 datasets and are not readily accessible via browser

The project data can be packaged for users as a zip file. Here is the number of projects per species that could benefit from packaging (more than 100 datasets).

$ for TSV in *tsv.cut ; do echo -n "$TSV " ; cut -f5 $TSV | sort | uniq -c | awk '$1>100' | wc -l ; done | column -t
athaliana_metadata.tsv.cut 29
celegans_metadata.tsv.cut 12
dmelanogaster_metadata.tsv.cut 26
drerio_metadata.tsv.cut 36
ecoli_metadata.tsv.cut 0
hsapiens_metadata.tsv.cut 422
mmusculus_metadata.tsv.cut 432
rnorvegicus_metadata.tsv.cut 17
scerevisiae_metadata.tsv.cut 17
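A sketch of the per-project packaging step; the project accession and file layout are invented for illustration, and tar is used here for portability (zip would work the same way):

```shell
# Bundle one project's per-run count tables into a single archive.
PROJECT="SRP000001"   # hypothetical project accession
mkdir -p "$PROJECT"
touch "$PROJECT/SRR0000001.se.tsv" "$PROJECT/SRR0000002.se.tsv"

tar -czf "$PROJECT.tar.gz" "$PROJECT"

# List the archive contents to confirm both tables are included.
tar -tzf "$PROJECT.tar.gz" | grep -c '\.se\.tsv$'
```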

Improving ability to use metadata

  1. Add experiment.title to short metadata in the search results
  2. Provide short and long metadata in the downloaded zipfiles and on the bulk page
  3. Give the option to download the search results, no matter how many results are found.

continuous webserver testing

An alert is required when the webserver is unavailable or not working properly.

  • Test if webpage is available

  • Test accession numbers and keywords

  • Test whether datasets are available
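The availability check could be a small cron-driven probe; a sketch (the alert action is a placeholder echo, and http://dee2.io is the endpoint named above):

```shell
check_up() {
    # Returns success when the server answers with HTTP 200.
    URL=$1
    CODE=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$URL" 2>/dev/null)
    [ "$CODE" = "200" ]
}

if check_up "http://dee2.io"; then
    echo "webserver OK"
else
    echo "ALERT: webserver down or unreachable"   # placeholder for a real alert
fi
```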

Number of mouse transcripts in annotation

Hello,

I was wondering about the annotation version you used for processing mouse experiments with Kallisto. The Ensembl 90 annotation has 131,195 unique transcripts; however, the cDNA file you've used only contains 109,282. Could you tell me why that is, and why some of the transcripts were dropped?

Thank you!

error when processing own data

ran the following command:

$ docker run -v $(pwd):/mnt -it mziemann/tallyup /root/code/volunteer_pipeline.sh ecoli -f e-BC24_CB2EYANXX_ATTCCT_L001_R1.fastq.gz e-BC24_CB2EYANXX_ATTCCT_L002_R1.fastq.gz

and got the following error:

+ echo Processing fastq files /mnt/e-BC24_CB2EYANXX_ATTCCT_L001_R1.fastq.gz and /mnt/e-BC24_CB2EYANXX_ATTCCT_L002_R1.fastq.gz in paired end mode
Processing fastq files /mnt/e-BC24_CB2EYANXX_ATTCCT_L001_R1.fastq.gz and /mnt/e-BC24_CB2EYANXX_ATTCCT_L002_R1.fastq.gz in paired end mode
+ FQ1=/mnt/e-BC24_CB2EYANXX_ATTCCT_L001_R1.fastq.gz
+ FQ2=/mnt/e-BC24_CB2EYANXX_ATTCCT_L002_R1.fastq.gz
++ basename /mnt/e-BC24_CB2EYANXX_ATTCCT_L001_R1.fastq.gz
++ cut -d . -f1
+ SRR=e-BC24_CB2EYANXX_ATTCCT_L001_R1
+ RDS=PE
+ '[' '!' -r /mnt/e-BC24_CB2EYANXX_ATTCCT_L001_R1.fastq.gz ']'
+ '[' '!' -r /mnt/e-BC24_CB2EYANXX_ATTCCT_L002_R1.fastq.gz ']'
+ mkdir e-BC24_CB2EYANXX_ATTCCT_L001_R1
+ cp /root/code/volunteer_pipeline.sh e-BC24_CB2EYANXX_ATTCCT_L001_R1
+ cd e-BC24_CB2EYANXX_ATTCCT_L001_R1
+ cp /mnt/e-BC24_CB2EYANXX_ATTCCT_L001_R1.fastq.gz /mnt/e-BC24_CB2EYANXX_ATTCCT_L002_R1.fastq.gz .
++ basename /mnt/e-BC24_CB2EYANXX_ATTCCT_L001_R1.fastq.gz
+ FQ1=e-BC24_CB2EYANXX_ATTCCT_L001_R1.fastq.gz
++ basename /mnt/e-BC24_CB2EYANXX_ATTCCT_L002_R1.fastq.gz
+ FQ2=e-BC24_CB2EYANXX_ATTCCT_L002_R1.fastq.gz
+ echo 'Starting /root/code/volunteer_pipeline.sh e-BC24_CB2EYANXX_ATTCCT_L001_R1
      current disk space = 842849864
      free memory = 131294344 '
+ tee -a e-BC24_CB2EYANXX_ATTCCT_L001_R1.log
Starting /root/code/volunteer_pipeline.sh e-BC24_CB2EYANXX_ATTCCT_L001_R1
      current disk space = 842849864
      free memory = 131294344 
++ echo e-BC24_CB2EYANXX_ATTCCT_L001_R1.fastq.gz e-BC24_CB2EYANXX_ATTCCT_L002_R1.fastq.gz
++ tr ' ' '\n'
++ grep -c '.gz$'
+ ISGZ=2
+ '[' 2 -eq 2 ']'
+ pigz -t e-BC24_CB2EYANXX_ATTCCT_L001_R1.fastq.gz
+ pigz -t e-BC24_CB2EYANXX_ATTCCT_L002_R1.fastq.gz
+ GZTEST=OK
+ '[' OK == OK ']'
+ pigz -d e-BC24_CB2EYANXX_ATTCCT_L001_R1.fastq.gz e-BC24_CB2EYANXX_ATTCCT_L002_R1.fastq.gz
++ basename e-BC24_CB2EYANXX_ATTCCT_L001_R1.fastq.gz .gz
+ FQ1=e-BC24_CB2EYANXX_ATTCCT_L001_R1.fastq
++ basename e-BC24_CB2EYANXX_ATTCCT_L002_R1.fastq.gz .gz
+ FQ2=e-BC24_CB2EYANXX_ATTCCT_L002_R1.fastq
++ echo e-BC24_CB2EYANXX_ATTCCT_L001_R1.fastq e-BC24_CB2EYANXX_ATTCCT_L002_R1.fastq
++ tr ' ' '\n'
++ grep -c '.bz2$'
+ ISBZ=0
+ '[' 0 -eq 2 ']'
++ echo e-BC24_CB2EYANXX_ATTCCT_L001_R1.fastq
++ egrep -c '(.fq$|.fastq$)'
+ ISFQ=1
+ '[' 1 -ne 2 ']'
+ echo Error. Unknown input file format. Input file extension should match .fastq or '.fq$'
Error. Unknown input file format. Input file extension should match .fastq or .fq$
+ exit1
+ rm e-BC24_CB2EYANXX_ATTCCT_L001_R1.fastq e-BC24_CB2EYANXX_ATTCCT_L002_R1.fastq '*.sra' '*tsv'
rm: cannot remove '*.sra': No such file or directory
rm: cannot remove '*tsv': No such file or directory
+ return 1
+ '[' 2 -eq 1 ']'
+ '[' -z PE ']'
+ exit
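The trace suggests the extension check counts only $FQ1 after decompression, so a valid pair yields ISFQ=1 and trips the `-ne 2` error. A hedged sketch of the fix, counting both decompressed files (file names are invented):

```shell
FQ1=sample_L001_R1.fastq
FQ2=sample_L002_R1.fastq

# Broken: echo "$FQ1" | egrep -c '(.fq$|.fastq$)'   -> counts 1 of 2 files
# Fixed: put both file names through the extension check.
ISFQ=$(echo "$FQ1 $FQ2" | tr ' ' '\n' | grep -Ec '(\.fq$|\.fastq$)')
echo "ISFQ=$ISFQ"
```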
