
target-s3's Introduction

target-s3

target-s3 is intended to be a multi-format/multi-cloud Singer target.

Built with the Meltano Target SDK.

Configuration

Accepted Config Options

{
    "format": {
        "format_type": "json",
        "format_parquet": {
            "validate": true|false
        },
        "format_json": {},
        "format_csv": {}
    },
    "cloud_provider": {
        "cloud_provider_type": "aws",
        "aws": {
            "aws_access_key_id": "test",
            "aws_secret_access_key": "test",
            "aws_region": "us-west-2",
            "aws_profile_name": "test-profile",
            "aws_bucket": "test-bucket",
            "aws_endpoint_override": "http://localhost:4566"
        }
    },
    "prefix": "path/to/output",
    "stream_name_path_override": "StreamName",
    "include_process_date": true|false,
    "append_date_to_prefix": true|false,
    "partition_name_enabled": true|false,
    "use_raw_stream_name": true|false,
    "append_date_to_prefix_grain": "day",
    "append_date_to_filename": true|false,
    "append_date_to_filename_grain": "microsecond",
    "flattening_enabled": true|false,
    "flattening_max_depth": int,
    "max_batch_age": int,
    "max_batch_size": int
}

format.format_parquet.validate [Boolean, default: False] - determines whether the data types of incoming records should be validated. When set to True, a schema is created from the first record, and any subsequent record that doesn't match that schema is cast to it.
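
A minimal sketch of what that behaviour implies, assuming pyarrow (the helper name is hypothetical, not target-s3's actual implementation): infer a schema from the first record, then cast later records to it.

import pyarrow as pa

def cast_to_first_record_schema(records: list[dict]) -> pa.Table:
    # Schema inferred from the first record only.
    schema = pa.Table.from_pylist(records[:1]).schema
    # Subsequent records are cast to that schema (raises if a cast is impossible).
    return pa.Table.from_pylist(records).cast(schema)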

Capabilities

  • about
  • stream-maps
  • schema-flattening

Settings

Configure using environment variables

When --config=ENV is provided, this Singer target automatically imports environment variables from the working directory's .env file; a config value is picked up whenever a matching environment variable is set, either in the terminal context or in .env.
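
For illustration, a rough sketch of that resolution, assuming the TARGET_S3_<SETTING> naming used elsewhere on this page (e.g. TARGET_S3_MAX_BATCH_SIZE); this is not the SDK's actual code:

import os

def config_from_env(prefix: str = "TARGET_S3_") -> dict:
    # Collect every matching environment variable into a config dict.
    return {
        key[len(prefix):].lower(): value
        for key, value in os.environ.items()
        if key.startswith(prefix)
    }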

Usage

You can easily run target-s3 by itself or in a pipeline using Meltano.

Executing the Target Directly

target-s3 --version
target-s3 --help
# Test using the "Carbon Intensity" sample:
tap-carbon-intensity | target-s3 --config /path/to/target-s3-config.json

SDK Dev Guide

See the dev guide for more instructions on how to use the Meltano Singer SDK to develop your own Singer taps and targets.

target-s3's People

Contributors

alexmaras, crowemi, dmnpignaud, edgarrmondragon, luisvicenteatprima, pnadolny13, prakharcode, rstml, vzzarr


target-s3's Issues

Configure Compression Settings

I have tried target-s3 and was able to change prefix and stream_name_path_override, but inside the S3 folder I get a compressed ".json.gz" file. Instead, I need a file called "data.json" without compression.

config credentials not used yet

I noticed that I can set credentials in the config but they aren't used anywhere yet. Setting them in the environment does seem to work.
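
A minimal sketch of what wiring them in could look like, assuming boto3 (this is not the project's actual code):

from boto3 import Session

def session_from_config(aws_config: dict) -> Session:
    # Build the session from the config block rather than the ambient environment.
    return Session(
        aws_access_key_id=aws_config.get("aws_access_key_id"),
        aws_secret_access_key=aws_config.get("aws_secret_access_key"),
        region_name=aws_config.get("aws_region"),
        profile_name=aws_config.get("aws_profile_name"),
    )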

bug: `TypeError: Object of type datetime is not JSON serializable`

I'm getting an error with format_type: json when there's a datetime value in my records.

2023-04-03 13:40:43,236 Target 'target-s3' is listening for input from tap.
2023-04-03 13:40:43,236 Initializing 'target-s3' target sink...
2023-04-03 13:40:43,236 Initializing target sink for stream 'Account'...
2023-04-03 13:40:43,237 Target 'target-s3' completed reading 4 lines of input (2 records, (0 batch manifests, 1 state messages).
2023-04-03 13:40:43,249 key: <my-bucket>/s3_loader/Account/2023/04/03/20230403
Traceback (most recent call last):
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/bin/target-s3", line 8, in <module>
    sys.exit(Targets3.cli())
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/singer_sdk/target_base.py", line 564, in cli
    target.listen(file_input)
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/singer_sdk/io_base.py", line 35, in listen
    self._process_endofpipe()
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/singer_sdk/target_base.py", line 280, in _process_endofpipe
    self.drain_all(is_endofpipe=True)
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/singer_sdk/target_base.py", line 441, in drain_all
    self._drain_all(list(self._sinks_active.values()), self.max_parallelism)
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/singer_sdk/target_base.py", line 474, in _drain_all
    Parallel()(delayed(_drain_sink)(sink=sink) for sink in sink_list)
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/joblib/parallel.py", line 1098, in __call__
    self.retrieve()
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/joblib/parallel.py", line 975, in retrieve
    self._output.extend(job.get(timeout=self.timeout))
  File "/Users/pnadolny/.pyenv/versions/3.9.7/lib/python3.9/multiprocessing/pool.py", line 771, in get
    raise self._value
  File "/Users/pnadolny/.pyenv/versions/3.9.7/lib/python3.9/multiprocessing/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/joblib/_parallel_backends.py", line 620, in __call__
    return self.func(*args, **kwargs)
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/joblib/parallel.py", line 288, in __call__
    return [func(*args, **kwargs)
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/joblib/parallel.py", line 288, in <listcomp>
    return [func(*args, **kwargs)
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/singer_sdk/target_base.py", line 471, in _drain_sink
    self.drain_one(sink)
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/singer_sdk/target_base.py", line 461, in drain_one
    sink.process_batch(draining_status)
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/target_s3/sinks.py", line 48, in process_batch
    format_type_client.run()
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/target_s3/formats/format_json.py", line 20, in run
    return super().run(self.context['records'])
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/target_s3/formats/format_base.py", line 62, in run
    self._write()
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/target_s3/formats/format_json.py", line 16, in _write
    return super()._write(json.dumps(self.records))
  File "/Users/pnadolny/.pyenv/versions/3.9.7/lib/python3.9/json/__init__.py", line 231, in dumps
    return _default_encoder.encode(obj)
  File "/Users/pnadolny/.pyenv/versions/3.9.7/lib/python3.9/json/encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/Users/pnadolny/.pyenv/versions/3.9.7/lib/python3.9/json/encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
  File "/Users/pnadolny/.pyenv/versions/3.9.7/lib/python3.9/json/encoder.py", line 179, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type datetime is not JSON serializable
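
A common workaround for this class of error (a sketch, not necessarily how target-s3 resolved it) is to hand json.dumps a default callable that renders datetimes as ISO-8601 strings:

import json
from datetime import datetime

def json_default(value):
    # Serialise datetimes as ISO-8601; defer to the normal error otherwise.
    if isinstance(value, datetime):
        return value.isoformat()
    raise TypeError(f"Object of type {type(value).__name__} is not JSON serializable")

records = [{"id": 1, "updated_at": datetime(2023, 4, 3, 13, 40, 43)}]
print(json.dumps(records, default=json_default))
# [{"id": 1, "updated_at": "2023-04-03T13:40:43"}]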

Create automated unit tests

Create automated unit tests which cover the following objects:

  • format_base
  • format_parquet
  • format_json

Tests should run as part of GitHub Actions and successful tests should be a requirement for merge.

bug: `PermanentRedirect Message`

I'm getting the following error when I try to use the parquet format type:

AWS Error UNKNOWN (HTTP status 301) during CreateMultipartUpload operation: Unable to parse ExceptionName: PermanentRedirect Message: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.

It looks like in this commit the logic for creating a file system changed. If I put it back to what it was, I'm able to get it to work. I think the error is related to the region not being set properly.

Also, if I'm reading it correctly, I think self.session gets instantiated on init each time, so it's never possible to get to [the linked line], is that right?
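
For reference, a sketch of where the region would need to land, assuming pyarrow's S3FileSystem is the filesystem being created (parameter names are pyarrow's; how target-s3 actually builds it may differ). Without the region, requests can be routed to the wrong regional endpoint and answered with a 301 PermanentRedirect:

from pyarrow import fs

def make_filesystem(aws_config: dict) -> fs.S3FileSystem:
    return fs.S3FileSystem(
        access_key=aws_config.get("aws_access_key_id"),
        secret_key=aws_config.get("aws_secret_access_key"),
        region=aws_config.get("aws_region"),  # the piece suspected to be missing
    )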

Empty file name when append_date_to_filename=False

Hello,

I'm using target-s3 within Meltano and want to overwrite the Parquet files I'm creating with every run.

I set the TARGET_S3_APPEND_DATE_TO_FILENAME and TARGET_S3_APPEND_DATE_TO_PREFIX to False but now I end up with files in S3 like so: bucket_name/prefix/stream_name/.parquet

It is still possible to query this file, but it feels a bit odd to me.

As you can see here the filename is only set when append_date_to_filename is True.

I would expect that when both append_date_to_filename and append_date_to_prefix are set to false, the filename would be equal to the stream_name, resulting in this: bucket_name/prefix/stream_name.parquet

Is there any reason behind it?

I could make a PR adjusting this if it is welcome.

Format jsonl not working

Hi, tried setting the format to jsonl like so:

format:
  format_type: jsonl
  format_parquet:
    validate: false

Got an immediate error:
'jsonl' is not a valid choice for 'format.format_type'

Using the latest Meltano Docker image.

Thanks for all your hard work!!!

adding aws_session_token to config

To access AWS, it is sometimes required to use an aws_session_token as well.

I forked:
Vzzarr@25e556a

aws_session_token=aws_config.get("aws_session_token", None),

and this is working for me.
Would you please consider adding this to the code base, or allow me to create a branch and open a PR?

Add a way to pass `endpoint_url` to target-s3

We are attempting to use target-s3 in a pipeline as it is capable of creating parquet files.

However, as part of our development and QA processes we must have a way of testing the pipeline. For this we would like to use either MinIO or localstack, but this is currently not possible as target-s3 has no config option for setting the endpoint_url.

Requested solution

Add endpoint_url config to work with the internal boto3 and pyarrow settings.

Many thanks.
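
A sketch of what the request amounts to, assuming boto3 and pyarrow are the two clients involved (the aws_endpoint_override key in the config block above looks related; the argument names below are the libraries' own):

import boto3
from pyarrow import fs

endpoint = "http://localhost:4566"  # e.g. localstack

s3_client = boto3.client("s3", endpoint_url=endpoint)  # boto3 side
s3_filesystem = fs.S3FileSystem(endpoint_override=endpoint)  # pyarrow side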

bump sdk version to 0.32

Working with older versions is a bit difficult, especially with the flattening logic failing for some taps/streams. Is this in focus?

Support MAX_BATCH_SIZE env variable

The TARGET_S3_MAX_BATCH_SIZE env var only seems to work when I declare my own default max_batch_size value in the meltano.yml configuration. This doesn't seem to be the case for other environment variables such as TARGET_S3_APPEND_DATE_TO_FILENAME_GRAIN.

I'm guessing this is also related to how max_batch_size doesn't appear on the Meltano Hub documentation page or in the meltano config target-s3 set --interactive list.

Is this expected behavior? I'm on Meltano 2.20.0 for... reasons.

Partition formatting structure

Still me bothering you.

I tried to use this config

    "append_date_to_prefix": true,
    "append_date_to_prefix_grain": "day",

but the problem with this is that it creates partitions in the form of year/month/day and not year=year/month=month/day=day, for example: 2023/05/01 instead of year=2023/month=05/day=01, which is the ideal layout when scanning this data with other services like Redshift Spectrum.

Am I using the library wrongly and there is a parameter I'm missing, or is this something that can be improved?
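
For illustration, a sketch of the two layouts being compared (hypothetical helper, not target-s3's code; the partition_name_enabled option in the config block above looks related):

from datetime import datetime

def date_prefix(now: datetime, hive_style: bool) -> str:
    if hive_style:
        # key=value partitions, which Redshift Spectrum / Hive-style pruning expects
        return now.strftime("year=%Y/month=%m/day=%d")
    # current behaviour described above
    return now.strftime("%Y/%m/%d")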

bug: `AttributeError: 'FormatParquet' object has no attribute 'file_iterator'`

When using the parquet type I get 1 successful file written to S3 but then it fails due to an AttributeError. Here is the full traceback:

2023-04-03 13:39:34,027 Target 'target-s3' is listening for input from tap.
2023-04-03 13:39:34,027 Initializing 'target-s3' target sink...
2023-04-03 13:39:34,027 Initializing target sink for stream 'Account'...
2023-04-03 13:39:34,028 Target 'target-s3' completed reading 4 lines of input (2 records, (0 batch manifests, 1 state messages).
2023-04-03 13:39:34,040 key: <my-bucket>/s3_loader/Account/2023/04/03/20230403
2023-04-03 13:39:35,485 Failed to write parquet file to S3.
Traceback (most recent call last):
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/bin/target-s3", line 8, in <module>
    sys.exit(Targets3.cli())
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/singer_sdk/target_base.py", line 564, in cli
    target.listen(file_input)
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/singer_sdk/io_base.py", line 35, in listen
    self._process_endofpipe()
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/singer_sdk/target_base.py", line 280, in _process_endofpipe
    self.drain_all(is_endofpipe=True)
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/singer_sdk/target_base.py", line 441, in drain_all
    self._drain_all(list(self._sinks_active.values()), self.max_parallelism)
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/singer_sdk/target_base.py", line 474, in _drain_all
    Parallel()(delayed(_drain_sink)(sink=sink) for sink in sink_list)
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/joblib/parallel.py", line 1098, in __call__
    self.retrieve()
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/joblib/parallel.py", line 975, in retrieve
    self._output.extend(job.get(timeout=self.timeout))
  File "/Users/pnadolny/.pyenv/versions/3.9.7/lib/python3.9/multiprocessing/pool.py", line 771, in get
    raise self._value
  File "/Users/pnadolny/.pyenv/versions/3.9.7/lib/python3.9/multiprocessing/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/joblib/_parallel_backends.py", line 620, in __call__
    return self.func(*args, **kwargs)
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/joblib/parallel.py", line 288, in __call__
    return [func(*args, **kwargs)
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/joblib/parallel.py", line 288, in <listcomp>
    return [func(*args, **kwargs)
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/singer_sdk/target_base.py", line 471, in _drain_sink
    self.drain_one(sink)
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/singer_sdk/target_base.py", line 461, in drain_one
    sink.process_batch(draining_status)
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/target_s3/sinks.py", line 48, in process_batch
    format_type_client.run()
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/target_s3/formats/format_parquet.py", line 57, in run
    return super().run(self.context['records'])
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/target_s3/formats/format_base.py", line 62, in run
    self._write()
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/target_s3/formats/format_parquet.py", line 53, in _write
    raise e
  File "/Users/pnadolny/Documents/Git/meltano_project/quickbooks_postgres/qb_pg/.meltano/loaders/target-s3/venv/lib/python3.9/site-packages/target_s3/formats/format_parquet.py", line 50, in _write
    self.file_iterator += 1
AttributeError: 'FormatParquet' object has no attribute 'file_iterator'
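
A minimal sketch of the kind of fix the traceback points at (hypothetical; the real class carries much more state): make sure the counter exists before _write increments it.

class FormatParquet:
    def __init__(self) -> None:
        # Initialise up front so the increment in _write cannot raise AttributeError.
        self.file_iterator = 0

    def _write(self) -> None:
        # ... write the parquet file to S3 ...
        self.file_iterator += 1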

File is overwritten with each sink operation

When using target-s3 with over 10k records, we've noticed that the only way to achieve the desired outcome is by using the "append_date_to_prefix_grain": "microsecond" option. This is because each sink operation overwrites the generated file instead of appending the data. Consequently, if more than 10k records are sent by the tap to the target within a short time frame, some data may be lost during the file-writing process.

Could we consider changing the mode from "w" to "a"? Is there a particular reason for using the write mode? If so, it might be necessary to create a new file for each sink operation to prevent data loss.

The target-s3-parquet target uses append mode to write data:
https://github.com/gupy-io/target-s3-parquet/blob/main/target_s3_parquet/sinks.py#L83C15-L83C25

collections.Mutable python3.10 support

Currently the import of collections.Mutable is not updated for Python 3.10. I'm more than happy to put forth a PR, or someone could just apply the import changes in formats/format_base.py:

--- a/target_s3/formats/format_base.py
+++ b/target_s3/formats/format_base.py
@@ -1,7 +1,6 @@
 import re
 import inflection
 import json
-import collections
 import logging
 from datetime import datetime
 from abc import ABCMeta, abstractmethod
@@ -9,6 +8,11 @@ from abc import ABCMeta, abstractmethod
 from boto3 import Session
 from smart_open import open

+try:
+    from collections.abc import MutableMapping
+except ImportError:
+    from collections import MutableMapping
+

 LOGGER = logging.getLogger("target-s3")
 DATE_GRAIN = {
@@ -175,7 +179,7 @@ class FormatBase(metaclass=ABCMeta):
         for k in sorted(d.keys()):
             v = d[k]
             new_key = self.flatten_key(k, parent_key, sep)
-            if isinstance(v, collections.MutableMapping):
+            if isinstance(v, MutableMapping):
                 items.extend(self.flatten_record(v, parent_key + [k], sep=sep).items())
             else:
                 items.append((new_key, json.dumps(v) if type(v) is list else v))

parquet schema wrongly inferred

If there's a nested object that is sparse in nature, meaning it occurs infrequently in the data, then the inferred schema for that data will be wrong.

e.g.
correct schema: ["attributes"]["teams"]["assigned"] = list<item: struct<email: string>>
But if there's no data for the above attribute, then due to how the schema is currently inferred here, the schema becomes list<item: null>, which can be tricky if the files are then loaded into Spark for downstream consumption.

I see there's already a comment about building the schema from the JSON schema rather than inferring it, which I believe is the right approach.

Just opening this issue, so I can pick it up later.
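
A sketch of the direction that comment points in (a deliberately simplified mapping, not the project's implementation): derive the pyarrow schema from the Singer JSON schema instead of inferring it from whichever records happen to arrive.

import pyarrow as pa

JSON_TO_ARROW = {
    "string": pa.string(),
    "integer": pa.int64(),
    "number": pa.float64(),
    "boolean": pa.bool_(),
}

def schema_from_json_schema(properties: dict) -> pa.Schema:
    # Ignores nullability, nested objects and arrays for brevity.
    return pa.schema(
        (name, JSON_TO_ARROW.get(prop.get("type"), pa.string()))
        for name, prop in properties.items()
    )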

OMG RAM Usage!

Hi, probably a known issue, but... wow, that's insane RAM usage! 🤣

Reading the default 10K rows per batch, it consumes 4 to 12 GB of memory in my case. Amazing!

Would love to trace the issue, hope I'll find the time.
So here's a place to discuss and track it.
Any pointers / comments / observations most welcome.

Thanks!

feat: implement csv writer

The csv format type logic still needs to be written. I think the current base format and smart_open handle most of this for us already, but we need to figure out the best way to pass in the csv config options for common use cases, e.g. delimiter, quotechar, encoding, etc.
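
One possible shape for it, sketched under assumptions (the format_csv keys below are hypothetical, and a real writer would stream rather than take a list): csv.DictWriter over a smart_open handle.

import csv
from smart_open import open as s3_open

def write_csv(records: list[dict], uri: str, csv_config: dict) -> None:
    with s3_open(uri, "w") as handle:
        writer = csv.DictWriter(
            handle,
            fieldnames=list(records[0].keys()),
            delimiter=csv_config.get("delimiter", ","),
            quotechar=csv_config.get("quotechar", '"'),
        )
        writer.writeheader()
        writer.writerows(records)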
