
mysql-statsd's Introduction


Deprecation

This project is no longer supported by Spil Games and has been adopted by DB-Art.

The new repository can be found here: MySQL-StatsD @ DB-Art

mysql-statsd

Daemon that gathers statistics from MySQL and sends them to statsd.

Usage / Installation

Install mysql_statsd through pip (pip is a Python package manager; please don't use sudo!):

pip install mysql_statsd

If all went well, you'll now have a new executable called mysql_statsd in your path.

Running mysql_statsd

$ mysql_statsd --config /etc/mysql-statsd.conf

Assuming you placed a config file named mysql-statsd.conf in /etc/.

See our example configuration, or read below for how to configure it.

Running the above command will start mysql_statsd in daemon mode. If you wish to see its output, run the command with -f / --foreground

Usage

$ mysql_statsd --help
usage: mysql_statsd.py [-h] [-c FILE] [-d] [-f]

optional arguments:
  -h, --help            show this help message and exit
  -c FILE, --config FILE
                        Configuration file
  -d, --debug           Prints statsd metrics next to sending them
  --dry-run             Print the output that would be sent to statsd without
                        actually sending data somewhere
  -f, --foreground      Dont fork main program

At the moment there is also a daemon script for this package.

You're more than welcome to help us improve it!

Platforms

We would love to support many other kinds of database servers, but currently we're supporting these:

  • MySQL 5.1
  • MySQL 5.5
  • Galera

Both MySQL versions are supported in the Percona flavour as well as vanilla.

Todo:

Support for the following platforms

  • MySQL 5.6
  • MariaDB

We're looking forward to your pull requests for other platforms.

Development installation

To install the package, set up a Python virtual environment.

Install the requirements (once the virtual environment is active):

pip install -r requirements.txt

NOTE: MySQL-Python package needs mysql_config command to be in your path.

There are future plans to replace the MySQL-Python package with PyMySQL.

After that you can run the script with:

$ python mysql_statsd/mysql_statsd.py

Coding standards

We like to stick with the python standard way of working: PEP-8

Configuration

The configuration consists of four sections:

  • daemon specific (log/pidfiles)
  • statsd (host, port, prefixes)
  • mysql (connections, queries, etc.)
  • metrics (metrics to be stored including their type)

Daemon

The daemon section allows you to set the paths to your log and pid files.

Statsd

The Statsd section allows you to configure the prefix and hostname of the metrics. In our example the prefix has been set to mysql and the hostname is included. This will log the status.com_select metric to: mysql.<hostname>.status.com_select
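Under those settings, the final metric path can be sketched as follows (a minimal illustration of the naming scheme, not the project's actual code):

```python
import socket

def build_metric_path(prefix, key, include_hostname=True):
    """Assemble a Graphite-style metric path such as mysql.<hostname>.status.com_select."""
    parts = [prefix]
    if include_hostname:
        # Graphite uses dots as path separators, so any dots in the
        # hostname must be replaced to keep it a single path component.
        parts.append(socket.gethostname().replace('.', '_'))
    parts.append(key)
    return '.'.join(parts)
```

For example, build_metric_path('mysql', 'status.com_select', include_hostname=False) yields 'mysql.status.com_select'.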

You can use any prefix that is necessary in your environment.

MySQL

The MySQL section allows you to configure the credentials of your mysql host (preferably on localhost) and the queries + timings for the metrics. The queries and timings are configured through the stats_types configurable, so take for instance the following example:

stats_types = status, innodb

This will execute both query_status and query_innodb on the MySQL server. The frequency can then be controlled through the time (in milliseconds) set in interval_status and interval_innodb. The complete configuration would be:

stats_types = status, innodb
query_status = SHOW GLOBAL STATUS
interval_status = 1000
query_innodb = SHOW ENGINE INNODB STATUS
interval_innodb = 10000

A special case is the query_commit: as the connection opened by mysql_statsd is kept open and autocommit is turned off by default, the status variables are not updated if your server is set to the REPEATABLE-READ transaction isolation level. Also, most probably your history list will skyrocket and your ibdata files will grow fast enough to drain all available disk space. So when in doubt about your transaction isolation: do include the query_commit!
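One possible way to configure the periodic commit (illustrative values; whether commit must be listed in stats_types may depend on your version, so treat this as a sketch):

```ini
stats_types = status, innodb, commit
query_commit = COMMIT
interval_commit = 1000
```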

Now here is the interesting part of mysql_statsd: if you wish to keep track of your own application data inside your application database, you can create your own custom query. So for example:

stats_types = myapp
query_myapp = SELECT some_metric_name, some_metric_value FROM myapp.metric_table WHERE metric_ts >= DATE_SUB(NOW(), interval 1 MINUTE)
interval_myapp = 60000

This will query your application database every 60 seconds, fetch all the metrics that have changed since then and send them through StatsD. Obviously you need to whitelist them via the metrics section below.
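Assuming mysql_statsd prefixes each returned metric name with the stats type (an assumption for illustration; some_metric_name is the placeholder column from the query above), the whitelist entry could look like:

```ini
[metrics]
; hypothetical entry for the custom query above
myapp.some_metric_name = g
```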

Metrics

The metrics section is basically a whitelist of all metrics you wish to send to Graphite via StatsD. Currently there is no possibility to whitelist all possible metrics, but there is one special case where we do allow wildcards: with bufferpool_* we whitelist that specific metric for all bufferpools. Don't worry if you haven't configured multiple bufferpools: the output will be omitted by InnoDB and also not parsed by the preprocessor.

Important to know about the metrics is that you will have to specify what type they are. By default Graphite stores all metrics equally, but treats them differently per type:

  • Gauge (g for gauge)
  • Rate (r for raw, d for delta)
  • Timer (t for timer)

Gauges are sticky values (like the speedometer in your car). Rates are the number of units that need to be translated to units per second. Timers are the time it took to perform a certain task.

An ever increasing value like com_select can be sent in various ways. If you wish to retain the absolute value of com_select, it is advised to configure it as a gauge. However, if you are going to use it as a rate (queries per second), there is no use storing it as a gauge first and later calculating the derivative of the gauge to get the rate. It would be far more accurate to store it as a rate in the first place.

Keep in mind that sending the com_select value as a raw value is in this case a bad habit: StatsD will average out the collected metrics per second, so sending a value of 1,000,000 ten times within a 10-second timeframe will average out to the expected 1,000,000. However, as the processing of metrics also takes a bit of time, the chance of missing one beat is relatively high, and once in a while you will end up sending the value only 9 times, averaging out to 900,000.
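The arithmetic behind that pitfall can be sketched with a simplified model of StatsD normalising a batch of raw values over a 10-second flush window (an illustration, not StatsD's actual code):

```python
def per_second_average(values, window_seconds=10):
    """Simplified model of StatsD normalising a batch of raw values
    to a per-second figure over one flush window."""
    return sum(values) / window_seconds

# Ten readings of 1,000,000 in a 10 s window average out as expected...
full_window = per_second_average([1_000_000] * 10)   # 1,000,000.0
# ...but miss a single beat and the average silently drops by 10%.
short_window = per_second_average([1_000_000] * 9)   # 900,000.0
```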

The best way to configure com_select as a rate is by defining it as a delta. The delta metric remembers the value from the previous run and only sends the difference between the two values.
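A minimal sketch of how such a delta could be computed (an illustration, not the project's actual implementation):

```python
class DeltaTracker:
    """Remember the previous reading per metric and emit only the difference."""

    def __init__(self):
        self._previous = {}

    def delta(self, key, value):
        # First observation: nothing to diff against yet, so report no change.
        last = self._previous.get(key)
        self._previous[key] = value
        return 0 if last is None else value - last
```

So for a counter like status.com_select, two successive readings of 1000 and 1150 would send 0 and then 150.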

Media:

Art gave a talk about this tool at Percona London 2013: http://www.percona.com/live/mysql-conference-2013/sessions/mysql-performance-monitoring-using-statsd-and-graphite

Contributors

spil-jasper

thijsdezoete

art-spilgames

bnkr


mysql-statsd's Issues

Multiple buffer pool instances

If one has multiple buffer pool instances (innodb_buffer_pool_instances), metrics sent to statsd regarding the buffer pool will mostly be the value of the last instance.
I believe the best way is to check whether that variable is already initialized in the dictionary.
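The suggested check could be sketched like this (a rough illustration, not the project's actual parsing code): accumulate values across instances instead of overwriting them:

```python
def accumulate(stats, key, value):
    """Add a per-instance reading to the running total instead of
    overwriting the previous instance's value."""
    if key in stats:
        stats[key] += value   # key already seen: another buffer pool instance
    else:
        stats[key] = value    # first instance initializes the entry
    return stats
```

With two instances reporting free_pages of 100 and 250, the metric would then carry 350 rather than just the last instance's 250.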

Sending rates

Hi guys,

How did you solve sending rates to Graphite instead of ever-incrementing counters? I mean, for example, the value of "Innodb_rows_read". StatsD by itself doesn't implement sending rates, and using the derive function in Graphite is not a good solution (it causes problems with missing data points).

Thank you,

Vitek

Please give this repo some love.

I don't get why your repo doesn't have PRs, as there is no alternative for monitoring mysql / mariadb through statsd/graphite. Your initiative is a good one (though as a PHP developer, I can only thank you and not contribute).

Maybe talk a little bit more about the project on twitter? :)

You should consider at least writing documentation; maybe it'll help users to contribute.

❤️

huge dips in data being sent to graphite

First thank you for this great way to send stats.

I'm having some issues with all my graphs in Graphite having huge dips in the data. Here's a sample of two showing data from a 15min period.

select render 1

I checked the raw data via rawData=true and it confirms that the data is not consistent.

9299850.3
9300506.3
9301359.7
8371844.9
9302794.1
9303719.2
9304351.2

Any idea what is causing this and how I can fix it?

Thanks!

The pypi version is outdated

Run pip install mysql-statsd and compare this file (mysql_statsd/thread_mysql.py) with master.

The PyPI file appears to be out of sync with the repo.

mysql_statsd listens on UDP:20981 - why?

I installed mysql_statsd via pip and just noticed that when it runs it listens on *:20981 - why does it do that? I cannot find anything in the code or in the docs to indicate that it listens for anything.
Also, is there a way to bind it to a specific IP?

Cannot start mysql_statsd, get

Hi,

I'm trying to get mysql_statsd to run on an ansible-deployed server.

In order to do so I've had to mess around with users and installing the system through pip, since it complains when I try to install it as root.
So what I've done now is run the pip installer as a dedicated user 'statsd', and set up a service file that calls the pip managed mysql_statsd file in the user's home dir.

The service is called like so:

[Unit]
Name = statsd
Description = mysql statsd

[Service]
ExecStart=/home/statsd/.local/bin/mysql_statsd --config /etc/mysql_statsd.conf
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=mysql_statsd
WorkingDirectory=/home/statsd
User=statsd
Group=statsd
RestartSec=10
# Environment=NODE_ENV=production PORT=5000

[Install]
WantedBy=multi-user.target

with the configuration file (in /etc/, that does seem to work)

[daemon]
logfile = /home/statsd/mysql_statsd/log/daemon.log
pidfile = /home/statsd/mysql_statsd.pid

[statsd]
host = {{ mariadb_statsd_host }}
port = {{ mariadb_statsd_port }}
prefix = mysql
include_hostname = true

[mysql]
; host = localhost
socket = /tmp/mysql.sock
username = {{ mariadb_statsd_user }}
password = {{ mariadb_statsd_pass }}
stats_types = status,variables,innodb,slave
query_variables = SHOW GLOBAL VARIABLES
interval_variables = 1000
query_status = SHOW GLOBAL STATUS
interval_status = 1000
query_innodb = SHOW ENGINE INNODB STATUS
interval_innodb = 1000
query_slave = SHOW SLAVE STATUS
interval_slave = 1000
query_commit = COMMIT
interval_commit = 1000
sleep_interval = 0


[metrics]
; g = gauge, c = counter (increment), t = timer, r = raw value, d = delta
variables.max_connections = g
status.max_used_connections = g
status.connections = d
status.aborted_connects = d
status.open_tables = g
status.open_files = g
status.open_streams = g
status.opened_tables = d
status.queries = d
status.empty_queries = d
status.slow_queries = d
status.questions = d
status.com_select = d
status.com_insert = d
status.com_update = d
status.com_delete = d
status.com_insert_select = d
status.qcache_queries_in_cache = g
status.qcache_inserts = d
status.qcache_hits = d
status.qcache_prunes = d
status.qcache_not_cached = d
status.qcache_lowmem_prunes = d
status.qcache_free_memory = g
status.qcache_free_blocks = g
status.qcache_total_blocks = g
status.flush_commands = g
status.created_tmp_disk_tables = d
status.created_tmp_tables = d
status.threads_running = g
status.threads_created = d
status.threads_connected = g
status.threads_cached = g
status.wsrep_flow_control_sent = g
status.wsrep_flow_control_recv = g
status.wsrep_local_sent_queue = g
status.wsrep_local_recv_queue = g
status.wsrep_cert_deps_distance = g
status.wsrep_local_cert_failures = d
status.rep_local_bf_aborts = d
status.wsrep_last_committed = d
status.wsrep_flow_control_paused = g
innodb.spin_waits = d
innodb.spin.rounds = d
innodb.os_waits = d
innodb.spin_rounds = d
innodb.os_waits = d
innodb.pending_normal_aio_reads = g
innodb.pending_normal_aio_writes = g
innodb.pending_ibuf_aio_reads = g
innodb.pending_aio_log_ios = g
innodb.pending_aio_sync_ios = g
innodb.pending_log_flushes = g
innodb.pending_buf_pool_flushes = g
innodb.pending_log_writes = g
innodb.pending_chkp_writes = g
innodb.file_reads = d
innodb.file_writes = d
innodb.file_fsyncs = d
innodb.ibuf_inserts = d
innodb.ibuf_merged = d
innodb.ibuf_merges = d
innodb.log_bytes_written = d
innodb.unflushed_log = g
innodb.log_bytes_flushed = d
innodb.log_writes = d
innodb.pool_size = g
innodb.free_pages = g
innodb.database_pages = g
innodb.modified_pages = g
innodb.pages_read = d
innodb.pages_created = d
innodb.pages_written = d
innodb.queries_inside = d
innodb.queries_queued = d
innodb.read_views = d
innodb.rows_inserted = d
innodb.rows_updated = d
innodb.rows_deleted = d
innodb.rows_read = d
innodb.innodb_transactions = d
innodb.unpurged_txns = d
innodb.history_list = g
innodb.current_transactions = g
innodb.active_transactions = g
innodb.locked_transactions = g
innodb.innodb_locked_tables = g
innodb.innodb_tables_in_use = g
innodb.read_views = g
innodb.hash_index_cells_total = g
innodb.hash_index_cells_used = g
innodb.total_mem_alloc = d
innodb.additional_pool_alloc = d
innodb.last_checkpoint = d
innodb.uncheckpointed_bytes = g
innodb.ibuf_used_cells = g
innodb.ibuf_free_cells = g
innodb.ibuf_cell_count = g
innodb.adaptive_hash_memory = g

slave.seconds_behind_master = g

; innodb.bufferpool_*.<metric> will whitelist these metrics for all bufferpool instances
; If you don't have multiple bufferpools it won't do anything
innodb.bufferpool_*.pool_size = g
innodb.bufferpool_*.pool_size_bytes = g
innodb.bufferpool_*.free_pages = g
innodb.bufferpool_*.database_pages = g
innodb.bufferpool_*.old_database_pages = g
innodb.bufferpool_*.modified_pages = g
innodb.bufferpool_*.pending_reads = g
innodb.bufferpool_*.pending_writes_lru = g
innodb.bufferpool_*.pending_writes_flush_list = g
innodb.bufferpool_*.pending_writes_single_page = g
innodb.bufferpool_*.pages_made_young = d
innodb.bufferpool_*.pages_not_young = d
innodb.bufferpool_*.pages_made_young_ps = g
innodb.bufferpool_*.pages_not_young_ps = g
innodb.bufferpool_*.pages_read = d
innodb.bufferpool_*.pages_created = d
innodb.bufferpool_*.pages_written = d
innodb.bufferpool_*.pages_read_ps = g
innodb.bufferpool_*.pages_created_ps = g
innodb.bufferpool_*.pages_written_ps = g
innodb.bufferpool_*.buffer_pool_hit_total = g
innodb.bufferpool_*.buffer_pool_hits = g
innodb.bufferpool_*.buffer_pool_young = g
innodb.bufferpool_*.buffer_pool_not_young = g
innodb.bufferpool_*.pages_read_ahead = g
innodb.bufferpool_*.pages_read_evicted = g
innodb.bufferpool_*.pages_read_random = g
innodb.bufferpool_*.lru_len = g
innodb.bufferpool_*.lru_unzip = g
innodb.bufferpool_*.io_sum = d
innodb.bufferpool_*.io_sum_cur = g
innodb.bufferpool_*.io_unzip = d
innodb.bufferpool_*.io_unzip_cur = g

Now, the daemon does start and the mysql_statsd script seems to pick up the configuration just fine, but it continually generates errors for all the threads it spawns. Then they all die, systemd restarts the entire service 10 seconds later and the same thing happens all over again.

I have no idea how to debug this as it's all python code that I'm not good at.

These are the errors that are put in the daemon.log file (specified in the config, that's how I know it's being read):

Caught CTRL+C / SIGKILL
Stopping threads
Waiting for 3 threads
All threads stopped
<mysql_statsd.mysql_statsd.MysqlStatsd instance at 0x7f7ab181acf8>
Caught CTRL+C / SIGKILL
Stopping threads
Waiting for 2 threads
All threads stopped
<mysql_statsd.mysql_statsd.MysqlStatsd instance at 0x7fc25eba7cf8>
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
TypeError: 'bool' object is not callable

Caught CTRL+C / SIGKILL
Stopping threads
Waiting for 2 threads
All threads stopped
<mysql_statsd.mysql_statsd.MysqlStatsd instance at 0x7f809f130cf8>
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
TypeError: 'bool' object is not callable

Caught CTRL+C / SIGKILL
Stopping threads
Waiting for 3 threads
All threads stopped
<mysql_statsd.mysql_statsd.MysqlStatsd instance at 0x7f43503f0cf8>
Caught CTRL+C / SIGKILL
Stopping threads
Waiting for 3 threads
All threads stopped
<mysql_statsd.mysql_statsd.MysqlStatsd instance at 0x7fae44e33cf8>
Caught CTRL+C / SIGKILL
Stopping threads
Waiting for 3 threads
All threads stopped
<mysql_statsd.mysql_statsd.MysqlStatsd instance at 0x7f9296f52cf8>

and finally the (I believe) relevant sections of the ansible script:

- name: Add statsd group
  group:
    name: statsd
    system: yes

- name: Add statsd user
  user:
    name: statsd
    shell: /bin/bash
    group: statsd
    home: /home/statsd
    system: yes

- name: install pymysql dependencies
  apt:
    name: default-libmysqlclient-dev
    state: present

- name: copy statsd configuration file
  template:
    src: mysql-statsd.conf.j2
    dest: /etc/mysql_statsd.conf
    owner: root
    group: root
    mode: 0744

- name: Install statsd
  become: yes
  become_user: statsd
  pip:
    name: ["mysql_statsd"]
    state: present

- name: install statsd service file
  copy:
    src: statsd.service
    dest: /etc/systemd/system/statsd.service
    mode: 0644
    owner: root
    group: root

- name: create log directory
  file:
    path: /home/statsd/mysql_statsd/log/
    state: directory
    owner: statsd
    group: statsd
    mode: 0755

Is there any way to figure out where things are going wrong?

If you need any more information I'd be happy to provide it.
Thanks!

TypeError: unsupported operand type(s) for -: 'str' and 'str'

Hello

I'm using the following version of mysql-statsd on CentOS 6.

# pip list | grep mysql-statsd
mysql-statsd (0.1.4)

When I run mysql_statsd I get the following error after a few seconds:

# mysql_statsd --config /root/statsd-mysql/mysql-statsd.conf -d --foreground
('status.aborted_connects', '1', 'd')
('status.com_delete', '707701', 'd')
('status.com_insert', '707700', 'd')
('status.com_insert_select', '0', 'd')
...
...
...
('status.threads_created', '90', 'd')
('status.threads_running', '17', 'g')
('variables.max_connections', '151', 'g')
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib64/python2.6/threading.py", line 532, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.6/site-packages/mysql_statsd/thread_mysql.py", line 149, in run
    self._run()
  File "/usr/lib/python2.6/site-packages/mysql_statsd/thread_mysql.py", line 92, in _run
    rows = self._preprocess(check_type, column_names, cursor.fetchall())
  File "/usr/lib/python2.6/site-packages/mysql_statsd/thread_mysql.py", line 131, in _preprocess
    return executing_class.process(rows, *extra_args)
  File "/usr/lib/python2.6/site-packages/mysql_statsd/preprocessors/innodb_preprocessor.py", line 66, in process
    self.process_line(line)
  File "/usr/lib/python2.6/site-packages/mysql_statsd/preprocessors/innodb_preprocessor.py", line 207, in process_line
    self.tmp_stats['unpurged_txns'] = self.tmp_stats['innodb_transactions'] - self.make_bigint(innorow[6], innorow[7])
TypeError: unsupported operand type(s) for -: 'str' and 'str'

ThreadStatsd crashes when mysql replication thread is stopped

ThreadStatsd is crashing when SQL Slave is down, because slave.seconds_behind_master is Null

Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 552, in __bootstrap_inner
    self.run()
  File "/home/ex/mysql-statsd/mysql_statsd/thread_statsd.py", line 71, in run
    self.send_stat(item)
  File "/home/ex/mysql-statsd/mysql_statsd/thread_statsd.py", line 55, in send_stat
    sender(k, float(v))
TypeError: float() argument must be a string or a number

Here is a workaround:

try:
    sender(k, float(v))
except Exception, e:
    print "k", k, "v", v
    print e

$ python mysql_statsd/mysql_statsd.py -d -f -c ~ex/mysql-statsd.conf
k slave.seconds_behind_master v None
float() argument must be a string or a number

Can't install via easy_install

I'm executing:

easy_install mysql-statsd

and getting:

No local packages or download links found for mysql-statsd
error: Could not find suitable distribution for Requirement.parse('mysql-statsd')

Incompatibility with Percona output for Innodb Status

Percona has additions to the InnoDB status output:

--------------
ROW OPERATIONS
--------------
0 queries inside InnoDB, 0 queries in queue
1 read views open inside InnoDB
1 transactions active inside InnoDB
1 out of 1000 descriptors used
---OLDEST VIEW---
Normal read view
Read view low limit trx n:o EA05374F2
Read view up limit trx id EA05374F1
Read view low limit trx id EA05374F2
Read view individually stored trx ids:
Read view trx id EA05374F1
-----------------
Main thread process no. 28092, id 140035932575488, state: flushing log
Number of rows inserted 47675820507, updated 5833120651, deleted 10936909882, read 501029518850
0.00 inserts/s, 43442.28 updates/s, 0.00 deletes/s, 76878.06 reads/s
------------
TRANSACTIONS
------------

In this case the line '-----------------' after OLDEST VIEW is treated as the beginning of a chunk name and TRANSACTIONS as the chunk text.

Solution:

# diff -u innodb_preprocessor.py.orig innodb_preprocessor.py
--- innodb_preprocessor.py.orig 2015-10-30 05:39:18.073914726 -0700
+++ innodb_preprocessor.py  2015-10-30 07:37:29.439914287 -0700
@@ -39,19 +39,23 @@
         chunks = {'junk': []}
         current_chunk = 'junk'
         next_chunk = False
+        oldest_view = False

         self.clear_variables()
         for row in rows:
             innoblob = row[2].replace(',', '').replace(';', '').replace('/s', '').split('\n')
             for line in innoblob:
                 # All chunks start with more than three dashes. Only the individual innodb bufferpools have three dashes
+                if line.startswith('---OLDEST VIEW---'):
+                    oldest_view = True
                 if line.startswith('----'):
                     # First time we see more than four dashes have to record the new chunk
-                    if next_chunk == False:
+                    if next_chunk == False and oldest_view == False:
                         next_chunk = True
                     else:
                     # Second time we see them we just have recorded the chunk
                         next_chunk = False
+                        oldest_view = False
                 elif next_chunk == True:
                     # Record the chunkname and initialize the array
                     current_chunk = line
