terraform-google-modules / terraform-google-slo

Creates SLOs on Google Cloud from custom Stackdriver metrics, with the capability to export SLOs to Google Cloud services and other systems.

Home Page: https://registry.terraform.io/modules/terraform-google-modules/slo/google

License: Apache License 2.0

Languages: HCL 86.15%, Makefile 8.99%, Ruby 4.87%
Topics: cft-terraform, operations

terraform-google-slo's Introduction

terraform-google-slo

Native SLOs (Service Monitoring API)

The slo-native submodule deploys SLOs to the Google Cloud Service Monitoring API. It uses the native Terraform resource google_monitoring_slo to achieve this.

Use this module if:

  • Your SLOs will use metrics from Cloud Monitoring backend only, not other backends.

  • You want to use the Service Monitoring UI to monitor your SLOs.

Usage

Deploy your SLO directly to the Service Monitoring API:

HCL format

module "slo_basic" {
  source = "terraform-google-modules/slo/google//modules/slo-native"
  config = {
    project_id        = var.app_engine_project_id
    service           = data.google_monitoring_app_engine_service.default.service_id
    slo_id            = "gae-slo"
    display_name      = "90% of App Engine default service HTTP response latencies < 500ms over a day"
    goal              = 0.9
    calendar_period   = "DAY"
    type              = "basic_sli"
    method            = "latency"
    latency_threshold = "0.5s"
  }
}

See examples/native/simple_example for another example.

YAML format

You can also write an SLO in a YAML definition file and load it into the module:

locals {
  config = yamldecode(file("configs/my_slo_config.yaml"))
}

module "slo_basic" {
  source = "terraform-google-modules/slo/google//modules/slo-native"
  config = local.config
}

A standard SRE practice is to write SLO definitions as YAML files and follow DRY principles. See examples/native/yaml_example for an example of how to write reusable YAML templates loaded into Terraform.
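For reference, such a YAML file might look like the following (a hypothetical sketch of configs/my_slo_config.yaml mirroring the HCL example above; the project and service values are placeholders):

```yaml
# Hypothetical configs/my_slo_config.yaml; keys mirror the HCL `config` object above.
project_id:        my-app-engine-project   # placeholder
service:           my-gae-service-id       # placeholder
slo_id:            gae-slo
display_name:      "90% of App Engine default service HTTP response latencies < 500ms over a day"
goal:              0.9
calendar_period:   DAY
type:              basic_sli
method:            latency
latency_threshold: 0.5s
```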

SLO generator (any monitoring backend)

The slo-generator module deploys the slo-generator in Cloud Run in order to compute and export SLOs on a schedule.

SLO configurations are pushed to Google Cloud Storage, and schedules are maintained using Cloud Scheduler.

Use this setup if:

  • You want to create SLOs from metrics backends other than Cloud Monitoring (e.g: Elastic, Datadog, Prometheus, ...).

  • You want to have custom reporting and exporting for your SLO data, along with historical data (e.g: BigQuery, DataStudio, custom metrics).

  • You want to have a common configuration format for all SLOs (see documentation).

Architecture

(architecture diagram)

Compatibility

This module is meant for use with Terraform 0.13+ and tested using Terraform 1.0+. If you find incompatibilities using Terraform >=0.13, please open an issue. If you haven't upgraded and need a Terraform 0.12.x-compatible version of this module, the last released version intended for Terraform 0.12.x is v1.0.2.

Usage

Deploy the SLO generator service with some SLO configs:

locals {
  config = yamldecode(file("configs/config.yaml"))
  slo_configs = [
    for cfg in fileset(path.module, "/configs/slo_*.yaml") :
    yamldecode(file(cfg))
  ]
}

module "slo-generator" {
  source = "terraform-google-modules/slo/google//modules/slo-generator"
  project_id  = "<PROJECT_ID>"
  config      = local.config
  slo_configs = local.slo_configs
}

For information on the config formats, please refer to the slo-generator documentation.
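As a rough sketch of what a single SLO config can look like in the slo-generator v2 format (field names follow the slo-generator documentation; the filters and names below are illustrative placeholders, not from this repository):

```yaml
# Hypothetical slo-generator v2 SLO config; values are placeholders.
apiVersion: sre.google.com/v2
kind: ServiceLevelObjective
metadata:
  name: gae-app-availability
spec:
  description: Availability of App Engine app
  backend: cloud_monitoring
  method: good_bad_ratio
  service_level_indicator:
    filter_good: project="${PROJECT_ID}" AND metric.type="appengine.googleapis.com/http/server/response_count" AND metric.labels.response_code < 500
    filter_valid: project="${PROJECT_ID}" AND metric.type="appengine.googleapis.com/http/server/response_count"
  goal: 0.99
```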

See examples/slo-generator/ for more examples with different setups.

Contributing

Refer to the contribution guidelines for information on contributing to this module.

terraform-google-slo's People

Contributors

andrewmackett, apeabody, bharathkkb, bkamin29, cloud-foundation-bot, cuong-ts, djabx, dogmatic69, morgante, naterd, release-please[bot], renovate[bot], taylorludwig


terraform-google-slo's Issues

slo-native output to allow creation of alert policy

At the moment the slo-native module only outputs the Project ID. It would be really useful if the module could output the SLO name or ID. Having the SLO name or ID as an output would allow it to be used in defining the filter when creating an alerting policy for the SLO as documented here.

I've opened PR #100 to add a name output.
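With such an output, an alerting policy could be wired up along these lines (a sketch only: `module.slo.name` is the hypothetical output proposed above, and the burn-rate threshold is illustrative):

```hcl
resource "google_monitoring_alert_policy" "slo_burn" {
  project      = var.project_id
  display_name = "SLO burn rate alert"
  combiner     = "OR"

  conditions {
    display_name = "Fast burn"
    condition_threshold {
      # select_slo_burn_rate(slo_name, lookback) is the Cloud Monitoring
      # filter function for SLO-based alerting; module.slo.name is the
      # hypothetical output added by the PR.
      filter          = "select_slo_burn_rate(\"${module.slo.name}\", \"3600s\")"
      threshold_value = 10
      duration        = "0s"
      comparison      = "COMPARISON_GT"
    }
  }
}
```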

Updating SLO config does not redeploy Cloud Function

In order for this module to hold config-as-code for SLOs, we need to be able to quickly modify an SLO, and redeploy the Cloud Function with the modifications to the SLO config.

Currently, this is blocked by an issue in terraform-google-event-function, itself being an upstream issue in the provider here.

Error when using SLO with Backend Prometheus

I get an error when using an SLO with backend class "Prometheus".

---
service_name:      product
feature_name:      search
slo_name:          freshness
slo_description:   95% of messages in queue are fresh
slo_target:        0.95
backend:
  class:           Prometheus
  method:          good_bad_ratio
  url:             ${prometheus_url}
  measurement:
    filter_good:  <query-1>
    filter_valid:  <query-2>
exporters:
- class:           Stackdriver
  project_id:      ${stackdriver_host_project_id}

Error when running TF (missing field changes with each run):

Error: Invalid value for module argument

  on main.tf line 130, in module "product_search_freshness":
 130:   config                     = local.slo_config_map["product-search-freshness"]

The given value is not suitable for child module variable "config" defined at
.terraform/modules/product_search_freshness/modules/slo/variables.tf:41,1-18:
attribute "slo_target": number required.

Somehow it looks like there is an issue with the YAML? When I output the config map in TF, I can see the SLO with all its fields from the file.

output "product_search_freshness" {
  value = local.slo_config_map["product-search-freshness"]
}

TF output:

product_search_freshness = {
  "backend" = {
    "class" = "Prometheus"
    "measurement" = {
      "filter_good" = "query-1"
      "filter_valid" = "query-2"
    }
    "method" = "good_bad_ratio"
    "url" = "https://prometheus.com/"
  }
  "exporters" = [
    {
      "class" = "Pubsub"
      "project_id" = "my-project-id"
      "topic_name" = "slo-export-topic"
    },
  ]
  "feature_name" = "search"
  "service_name" = "product"
  "slo_description" = "95% of messages in queue are fresh"
  "slo_name" = "freshness"
  "slo_target" = 0.95
}

Any idea what can be wrong here?

Thx Sven

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Detected dependencies

regex
Makefile
  • cft/developer-tools 1.0
build/int.cloudbuild.yaml
  • cft/developer-tools 1.0
build/lint.cloudbuild.yaml
  • cft/developer-tools 1.0
terraform
examples/native/simple_example/main.tf
  • terraform-google-modules/slo/google ~> 3.0
  • terraform-google-modules/slo/google ~> 3.0
  • terraform-google-modules/slo/google ~> 3.0
  • terraform-google-modules/slo/google ~> 3.0
  • terraform-google-modules/slo/google ~> 3.0
  • terraform-google-modules/slo/google ~> 3.0
examples/native/yaml_example/main.tf
  • terraform-google-modules/slo/google ~> 3.0
  • terraform-google-modules/slo/google ~> 3.0
  • terraform-google-modules/slo/google ~> 3.0
  • terraform-google-modules/slo/google ~> 3.0
  • terraform-google-modules/slo/google ~> 3.0
  • terraform-google-modules/slo/google ~> 3.0
examples/slo-generator/simple_example/main.tf
  • terraform-google-modules/slo/google ~> 3.0
examples/slo-generator/sre_export_cloudevent_eventarc/main.tf
  • terraform-google-modules/slo/google ~> 3.0
  • terraform-google-modules/slo/google ~> 3.0
examples/slo-generator/sre_export_cloudevent_exporter/main.tf
  • terraform-google-modules/slo/google ~> 3.0
  • terraform-google-modules/slo/google ~> 3.0
examples/slo-generator/sre_service_advanced/sre.tf
  • terraform-google-modules/slo/google ~> 3.0
examples/slo-generator/sre_service_advanced/teams.tf
  • terraform-google-modules/slo/google ~> 3.0
  • terraform-google-modules/slo/google ~> 3.0
examples/slo-generator/sre_shared_schedulers/main.tf
  • terraform-google-modules/slo/google ~> 3.0
examples/slo-generator/sre_shared_schedulers_pubsub/main.tf
  • terraform-google-modules/slo/google ~> 3.0
modules/slo-generator/versions.tf
  • google >= 5.0, < 6
  • hashicorp/terraform >= 0.13.0
modules/slo-native/versions.tf
  • google >= 5.0, < 6
  • hashicorp/terraform >= 0.13.0
test/fixtures/simple_example/main.tf
test/fixtures/simple_example/versions.tf
  • hashicorp/terraform >= 0.12
test/setup/main.tf
  • terraform-google-modules/project-factory/google ~> 14.0
test/setup/versions.tf
  • hashicorp/terraform >= 0.13

  • Check this box to trigger a request for Renovate to run again on this repository

Can we have a JSON template feature just like we have for YAML templates?

TL;DR

The JSON template is readily available to download from the console. Since JSON is natively supported, it could be added as a feature in this module.

Terraform Resources

No response

Detailed design

A sample JSON for Cloud Function execution times:

{
  "displayName": "90% - Distribution Cut - Rolling 7 days",
  "goal": 0.9,
  "rollingPeriod": "604800s",
  "serviceLevelIndicator": {
    "requestBased": {
      "distributionCut": {
        "distributionFilter": "metric.type=\"cloudfunctions.googleapis.com/function/execution_times\" resource.type=\"cloud_function\"",
        "range": {
          "min": null,
          "max": 10
        }
      }
    }
  }
}

Additional information

No response

slo-generator image not accessible

TL;DR

Google Cloud Run Service Agent service-{}@serverless-robot-prod.iam.gserviceaccount.com must have permission to read the image, gcr.io/slo-generator-ci-a2b4/slo-generator:latest.

Expected behavior

No response

Observed behavior

No response

Terraform Configuration

CI

Terraform Version

CI

Additional information

No response

New `metrics` block from slo-generator v1.3.0 makes TF 0.13 module fail

Terraform struggles with validation of complex nested variables. For instance, it cannot take a list of maps with arbitrary types for the map values: the values must either all be required or all share the same type. There is no concept of optional attributes either.

See here and here for more context on those issues.

To avoid hitting this issue whenever anything is added to the SLO config or Error Budget Policy, we should pass the SLO config file paths (with any variable replacement happening inside the module) instead of the SLO config file contents, to bypass the variable validation (which slo-generator takes care of anyway).
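On the caller side, the proposal above could look roughly like this (a sketch; `slo_config_paths` is a hypothetical variable name, not an existing module input):

```hcl
module "slo_generator" {
  source     = "terraform-google-modules/slo/google//modules/slo-generator"
  project_id = var.project_id

  # Pass file paths instead of decoded contents, so Terraform never has to
  # type-check the nested SLO structures; slo-generator validates them itself.
  slo_config_paths = fileset(path.module, "configs/slo_*.yaml")
}
```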

BigQuery Table Not Created

Hi,

We enabled the exporter to BigQuery (before, we only exported to Stackdriver) and applied the TF resource in our GCP project. The TF apply command was successful, but after looking into GCP we noticed that the SLO export to BigQuery fails (the Cloud Function ends in a crashed state).

  • exporters.yaml
pipeline:
  - class: Bigquery
    project_id: ${project_id}
    dataset_id: slo_reports
    table_name: report

  - class: Stackdriver
    project_id: ${stackdriver_host_project_id}

slo:
  - class: Pubsub
    project_id: ${project_id}
    topic_name: ${pubsub_topic_name}
  • slo-pipeline cloud function logs
Traceback (most recent call last):
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker_v1.py", line 402, in run_background_function
    _function_handler.invoke_user_function(event_object)
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker_v1.py", line 222, in invoke_user_function
    return call_user_function(request_or_event)
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker_v1.py", line 219, in call_user_function
    event_context.Context(**request_or_event.context))
  File "/user_code/main.py", line 25, in main
    compute.export(data, exporters)
  File "/env/local/lib/python3.7/site-packages/slo_generator/compute.py", line 97, in export
    ret = exporter.export(data, **config)
  File "/env/local/lib/python3.7/site-packages/slo_generator/exporters/bigquery.py", line 53, in export
    table_id = config['table_id']
KeyError: 'table_id'

When looking into BigQuery, we can see that the dataset with id "slo_reports" was created but it doesn't contain any table (e.g. "report" as defined in the exporters.yaml).

Any idea why we have this issue?

Thx,
Sven
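Judging only from the traceback, the BigQuery exporter reads `config['table_id']` while the pipeline config above uses `table_name`, so renaming the key may be the fix (this is an assumption based on the traceback, not on the exporter's documented schema):

```yaml
pipeline:
  - class: Bigquery
    project_id: ${project_id}
    dataset_id: slo_reports
    table_id: report   # assumption: the exporter reads `table_id`, not `table_name`
```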

An error occurs when i try to use a custom service account on slo sub-module

Hello,
If I try to use a custom service account on the SLO sub-module, I get this error:

Error: Invalid count argument

  on .terraform/modules/slo_generator.slo-hp-fr-quality/modules/slo/iam.tf line 18, in resource "google_service_account" "main":
  18:   count        = var.service_account_email != "" ? 0 : 1

The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.

My Terraform declaration is:

module "slo-hp-fr-quality" {
  #source                = "github.com/terraform-google-modules/terraform-google-slo//modules/slo?ref=d09d7550aef2748000603b5e52551bbaab59f4d2"
  source                = "github.com/terraform-google-modules/terraform-google-slo//modules/slo?ref=b9862d0a22bb7e2a6adaf3e8b4b6173b08d995ea"
  region                = var.region
  project_id            = var.project_id
  service_account_email = "${google_service_account.slo_computations.email}"
  error_budget_policy   = var.error_budget_policy
  config = {
    slo_name        = "quality"
    slo_target      = 0.999
    slo_description = "quality on nfs fr home page"
    service_name    = "nfs"
    feature_name    = "fr-hp"
    exporters = [{
      class      = "Pubsub"
      project_id = var.project_id
      topic_name = module.slo-pipeline.pubsub_topic_name
    }]
    backend = {
      class      = "Stackdriver"
      project_id = var.project_id
      method     = "good_bad_ratio"
      measurement = {
        "filter_good" = "project=\"${var.project_id}\" AND metric.type=\"logging.googleapis.com/user/${google_logging_metric.quality_count.name}\" AND metric.labels.country=\"fr\" AND metric.labels.status_code = 1"
        "filter_bad"  = "project=\"${var.project_id}\" AND metric.type=\"logging.googleapis.com/user/${google_logging_metric.quality_count.name}\" AND metric.labels.country=\"fr\" AND metric.labels.status_code = 0"
      }
    }
  }
}
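Besides the `-target` workaround the error message suggests, the usual fix for this class of error is to make `count` depend only on a value known at plan time, e.g. a boolean flag rather than a resource attribute. A module-side sketch (the variable name is hypothetical, not an existing module input):

```hcl
# Sketch: gate service-account creation on a plan-time boolean instead of
# comparing an email attribute that is only known after apply.
variable "create_service_account" {
  type    = bool
  default = true
}

resource "google_service_account" "main" {
  count      = var.create_service_account ? 1 : 0
  account_id = "slo-computations"
}
```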

Bug with latest version

Error: Error waiting for Creating CloudFunctions Function: Error code 3, message: Function failed on loading user code. Error message: Code in file main.py can't be loaded.
Did you list all required modules in requirements.txt?
Detailed stack trace:
Traceback (most recent call last):
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker_v1.py", line 315, in check_or_load_user_function
    _function_handler.load_user_function()
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker_v1.py", line 190, in load_user_function
    spec.loader.exec_module(main_module)
  File "<frozen importlib._bootstrap_external>", line 728, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/user_code/main.py", line 21, in <module>
    from slo_generator import compute
ModuleNotFoundError: No module named 'slo_generator'

To fix, upgrade your slo_generator_version to 1.1.2.

A Python error occurs on the slo-pipeline Cloud Function

Hello,

We obtain regularly errors on the slo-pipeline cloud function.

For example, on February 25, 2020 between 7:45 p.m. and 8 p.m., there were no successful executions of the slo-pipeline Cloud Function:

image

When I check the logs:

image

In the logs I regularly see this type of error:

Traceback (most recent call last):
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 383, in run_background_function
    _function_handler.invoke_user_function(event_object)
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 217, in invoke_user_function
    return call_user_function(request_or_event)
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 214, in call_user_function
    event_context.Context(**request_or_event.context))
  File "/user_code/main.py", line 25, in main
    compute.export(data, exporters)
  File "/env/local/lib/python3.7/site-packages/slo_generator/compute.py", line 94, in export
    ret = exporter.export(data, **config)
  File "/env/local/lib/python3.7/site-packages/slo_generator/exporters/stackdriver.py", line 47, in export
    self.create_timeseries(data, **config)
  File "/env/local/lib/python3.7/site-packages/slo_generator/exporters/stackdriver.py", line 64, in create_timeseries
    data['error_budget_policy_step_name'])
TypeError: 'NoneType' object is not subscriptable

or this other error :

Traceback (most recent call last):
  File "/env/local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
    return callable_(*args, **kwargs)
  File "/env/local/lib/python3.7/site-packages/grpc/_channel.py", line 824, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/env/local/lib/python3.7/site-packages/grpc/_channel.py", line 726, in _end_unary_response_blocking
    raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
    status = StatusCode.INVALID_ARGUMENT
    details = "One or more TimeSeries could not be written: Points must be written in order. One or more of the points specified had an older end time than the most recent point.: timeSeries[0]"
    debug_error_string = "{"created":"@1582657332.973434028","description":"Error received from peer ipv4:173.194.76.95:443","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"One or more TimeSeries could not be written: Points must be written in order. One or more of the points specified had an older end time than the most recent point.: timeSeries[0]","grpc_status":3}"
>

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 383, in run_background_function
    _function_handler.invoke_user_function(event_object)
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 217, in invoke_user_function
    return call_user_function(request_or_event)
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 214, in call_user_function
    event_context.Context(**request_or_event.context))
  File "/user_code/main.py", line 25, in main
    compute.export(data, exporters)
  File "/env/local/lib/python3.7/site-packages/slo_generator/compute.py", line 94, in export
    ret = exporter.export(data, **config)
  File "/env/local/lib/python3.7/site-packages/slo_generator/exporters/stackdriver.py", line 47, in export
    self.create_timeseries(data, **config)
  File "/env/local/lib/python3.7/site-packages/slo_generator/exporters/stackdriver.py", line 90, in create_timeseries
    result = self.client.create_time_series(project, [series])
  File "/env/local/lib/python3.7/site-packages/google/cloud/monitoring_v3/gapic/metric_service_client.py", line 1039, in create_time_series
    request, retry=retry, timeout=timeout, metadata=metadata
  File "/env/local/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py", line 143, in __call__
    return wrapped_func(*args, **kwargs)
  File "/env/local/lib/python3.7/site-packages/google/api_core/retry.py", line 286, in retry_wrapped_func
    on_error=on_error,
  File "/env/local/lib/python3.7/site-packages/google/api_core/retry.py", line 184, in retry_target
    return target()
  File "/env/local/lib/python3.7/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
    return func(*args, **kwargs)
  File "/env/local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
    six.raise_from(exceptions.from_grpc_error(exc), exc)
  File "<string>", line 3, in raise_from
google.api_core.exceptions.InvalidArgument: 400 One or more TimeSeries could not be written: Points must be written in order. One or more of the points specified had an older end time than the most recent point.: timeSeries[0]

Top errors :
image

When regarding the executions list, we can see many bad executions :

image

I'm not sure whether the lack of executions on February 25, 2020 is linked to the regular bad executions. Maybe a service outage of the Cloud Functions service?

image

image

Terraform wants to change resources when "local files" are recreated

Terraform wants to change resources when "local files" (generated from templates in /templates) are recreated (e.g. when TF is run on another environment which has no local files).

How to reproduce:

  • Make sure there are no changes and infrastructure is up-to-date
  • Clean local folder .terraform/
  • Run Terraform "plan" step

-> TF will show changes for resources like local_file (created), google_cloudfunctions_function (updated in-place), and google_storage_bucket_object (replaced).

Not sure yet why, but it seems like a new archive file is created (with a different name) even though the file content didn't change.

(Could be that the MD5 checksum is different and therefore the archive file name changes: https://github.com/terraform-google-modules/terraform-google-slo/blob/master/modules/slo/main.tf#L70)

SLO Pipeline ServiceAccount Misses Storage Permissions

After upgrading to module version v1.0.0, the CF slo-pipeline crashes with the following stack trace:

Short

google.api_core.exceptions.Forbidden: 403 GET https://storage.googleapis.com/storage/v1/b/slo-pipeline-6d9e0713ba72?projection=noAcl&prettyPrint=false: [email protected] does not have storage.buckets.get access to the Google Cloud Storage bucket. 

Full

slo-pipeline2om9asvy7k0r
Traceback (most recent call last):
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker_v2.py", line 449, in run_background_function
    _function_handler.invoke_user_function(event_object)
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker_v2.py", line 268, in invoke_user_function
    return call_user_function(request_or_event)
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker_v2.py", line 265, in call_user_function
    event_context.Context(**request_or_event.context))
  File "/user_code/main.py", line 23, in main
    exporters = download_gcs("gs://slo-pipeline-6d9e0713ba72/config/exporters.json")
  File "/user_code/main.py", line 53, in download_gcs
    bucket = storage_client.get_bucket(bucket)
  File "/env/local/lib/python3.7/site-packages/google/cloud/storage/client.py", line 361, in get_bucket
    if_metageneration_not_match=if_metageneration_not_match,
  File "/env/local/lib/python3.7/site-packages/google/cloud/storage/bucket.py", line 936, in reload
    if_metageneration_not_match=if_metageneration_not_match,
  File "/env/local/lib/python3.7/site-packages/google/cloud/storage/_helpers.py", line 210, in reload
    retry=DEFAULT_RETRY,
  File "/env/local/lib/python3.7/site-packages/google/cloud/storage/_http.py", line 63, in api_request
    return call()
  File "/env/local/lib/python3.7/site-packages/google/api_core/retry.py", line 286, in retry_wrapped_func
    on_error=on_error,
  File "/env/local/lib/python3.7/site-packages/google/api_core/retry.py", line 184, in retry_target
    return target()
  File "/env/local/lib/python3.7/site-packages/google/cloud/_http.py", line 435, in api_request
    raise exceptions.from_http_response(response)
google.api_core.exceptions.Forbidden: 403 GET https://storage.googleapis.com/storage/v1/b/slo-pipeline-6d9e0713ba72?projection=noAcl&prettyPrint=false: [email protected] does not have storage.buckets.get access to the Google Cloud Storage bucket.

-- Sven

feat: slo burn rate alerting policies

It would be great if the module could accept pairs of lookback period + burn-rate threshold and generate alerting policies, with an example/default setting up the recommended starting point: fast (60 minutes + 10x) and slow (24h + 2x) burn-rate alerts.
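A sketch of what such an input could look like (variable and attribute names are hypothetical, not an existing module interface):

```hcl
variable "burn_rate_alerts" {
  # Hypothetical input: pairs of lookback window + burn-rate threshold that
  # the module would iterate over to generate one alerting policy each.
  type = list(object({
    lookback  = string # e.g. "3600s"
    threshold = number # e.g. 10
  }))
  default = [
    { lookback = "3600s",  threshold = 10 }, # fast burn (60 minutes + 10x)
    { lookback = "86400s", threshold = 2 },  # slow burn (24h + 2x)
  ]
}
```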
