
Comments (46)

alexchumak avatar alexchumak commented on June 15, 2024 14

+1 πŸ‘

Pretty much a deal-breaker feature that's missing at the moment.

from amplify-category-api.

joekendal avatar joekendal commented on June 15, 2024 10

I don't think it's good practice to use stage variables as part of your table naming convention. Rather, it is recommended to use separate AWS accounts for all stages with AWS Organizations.

wai-chuen avatar wai-chuen commented on June 15, 2024 9

Hey, any news on this? Has anyone been successful putting the APPSYNC_ID and the ENV into the VTL params?

Many thanks!

@jonperryxlm came up with a fix in aws-amplify/amplify-cli#1946 where you can specify an additional function that feeds the API ID and env into the stash, which worked for me. The relevant bits πŸ‘‡ (it requires setting up a pipeline resolver, though).

In CustomResources.json

"addEnvVariablesToStash": {
  "Type": "AWS::AppSync::FunctionConfiguration",
  "Properties": {
    "ApiId": {
      "Ref": "AppSyncApiId"
    },
    "DataSourceName": "NONE",
    "Description": "Sets $ctx.stash.env to the Amplify environment and $ctx.stash.apiId to the Amplify API ID",
    "FunctionVersion": "2018-05-29",
    "Name": "addEnvVariablesToStash",
    "RequestMappingTemplate": "{\n          \"version\": \"2017-02-28\",\n          \"payload\": {}\n        }",
    "ResponseMappingTemplate": {
      "Fn::Join": [
        "",
        [
          "$util.qr($context.stash.put(\"env\", \"",
          { "Ref" : "env" },
          "\"))\n$util.qr($context.stash.put(\"apiId\", \"",
          { "Ref": "AppSyncApiId" },
          "\"))\n$util.toJson($ctx.prev.result)"
        ]
      ]
    }
  }
},

And then in your resolver that needs it

{                                                                                                                 
  "version" : "2018-05-29",
  "operation" : "BatchGetItem",
  "tables" : {
    "MyTablename-${ctx.stash.apiId}-${ctx.stash.env}": {
      "keys": $util.toJson($ids),
      "consistentRead": true
    }
  }
}

robboerman2 avatar robboerman2 commented on June 15, 2024 6

Yes, this is already supported, through substitutions.

robboerman2 avatar robboerman2 commented on June 15, 2024 5

The workaround of a pipeline where you use a Lambda to import the environment variables into the stash will impact performance in bigger AppSync APIs.

With BatchGetItem you really see that AppSync is still in its early stages.

I really hope that AWS will add environment variables for mapping templates. Adding the option of getting the type name and field name in the resolver would also be great.

And please remove the nonsense requirement of setting the table name again, even when the data source already has that table attached; that is just a straight-up bug in AWS AppSync.

maroy1986 avatar maroy1986 commented on June 15, 2024 5

I'm also vouching for this. Having to manage these table names in resolvers is a real pain in the butt. People don't think about it and push resolver code to git with the table names changed all the time.

There should be an easy way to get the table names for the given environment; that would be a lifesaver.

And don't tell me to just ask people not to push these modified files. You know people: they forget as soon as you let them go. That's human nature...

thejasonfisher avatar thejasonfisher commented on June 15, 2024 5

This solution is ridiculous. We still don't have support for passing these variables directly to a resolver?

onlybakam avatar onlybakam commented on June 15, 2024 5

AppSync just released support for Environment Variables: https://docs.aws.amazon.com/appsync/latest/devguide/environmental-variables.html

The feature is live and the CloudFormation docs will be updated shortly.
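For readers landing here now, an APPSYNC_JS resolver can read those variables off `ctx.env`. A minimal sketch, assuming a variable named `TODO_TABLE` has been configured on the API (the variable name and plain DynamoDB key shape are illustrative; a real resolver module would `export` these functions and typically build keys with `util.dynamodb.toMapValues`):

```javascript
// Sketch of an APPSYNC_JS resolver using AppSync environment variables.
// TODO_TABLE is an assumed variable configured on the API; in a real
// resolver file, request/response would be exported.
function request(ctx) {
  return {
    operation: 'BatchGetItem',
    tables: {
      // Table name resolved per stage from the API's environment variables
      [ctx.env.TODO_TABLE]: {
        keys: ctx.args.ids.map((id) => ({ id: { S: id } })),
        consistentRead: true,
      },
    },
  };
}

function response(ctx) {
  // Results come back keyed by the same table name
  return ctx.result.data[ctx.env.TODO_TABLE];
}
```

This removes the need for the stash-based pipeline workaround described earlier in the thread.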

apoorvmote avatar apoorvmote commented on June 15, 2024 4

I also have the same issue. On top of that, how do I set the table name when I am testing the API locally?

cianclarke avatar cianclarke commented on June 15, 2024 3

πŸ‘ Also having issues with Batch*Item operations - table name differs per environment.
The resolver pipeline has an association with the DataSource. Why is table name inherent for Query, PutItem, GetItem, but needed for BatchItem operations?
I'd rather not shell out to a lambda to evaluate.

SwaySway avatar SwaySway commented on June 15, 2024 3

Adding an update here: as of now, AppSync does not support adding environment variables into resolver functions. We are looking at other ways we can address this, and we welcome any PRs or discussions on potential solutions.

Dizzzmas avatar Dizzzmas commented on June 15, 2024 3

+1 for this feature

I was recently using Amplify + AppSync and had to create custom resolvers for BatchPutItem. It works well when deployed, but because the table name is different when I develop locally with amplify mock api, the resolver becomes essentially useless.

HeskethGD avatar HeskethGD commented on June 15, 2024 3

In my case I'm building an AppSync JS function connected to an HTTP data source, which in this case is a Step Function. We're using CDK and we have Sandbox, Dev, and Prod accounts. We need to pass a different Step Function ARN to the JS code for each one, and currently we have to inline the code as a string to inject a different variable. It looks messy, it's error-prone, and it's not easily tested. It would be good to be able to pass env variables to the JS function like we do with Lambda functions.
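The inlining described above can at least be isolated in a small helper that substitutes placeholders into the resolver source before it is handed to the AppSync function. A hedged sketch, assuming a `__STATE_MACHINE_ARN__` placeholder convention (the placeholder name, ARN, and resolver body are illustrative, not from the original):

```javascript
// Substitute per-stage values into an inlined resolver source string.
// In CDK, the rendered string would be passed to something like
// appsync.Code.fromInline(rendered) — the placeholder scheme is an assumption.
function injectEnv(resolverSource, vars) {
  return Object.entries(vars).reduce(
    (src, [name, value]) => src.split(`__${name}__`).join(value),
    resolverSource,
  );
}

// Hypothetical HTTP-datasource resolver body with a placeholder
const source = `
export function request(ctx) {
  return {
    method: 'POST',
    resourcePath: '/',
    params: {
      body: {
        stateMachineArn: '__STATE_MACHINE_ARN__',
        input: JSON.stringify(ctx.args),
      },
    },
  };
}`;

const rendered = injectEnv(source, {
  STATE_MACHINE_ARN: 'arn:aws:states:us-east-1:123456789012:stateMachine:MyFlow-dev',
});
```

It is still string templating, but keeping the substitution in one tested helper contains the mess until AppSync-native env variables are available for this path.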

mattiLeBlanc avatar mattiLeBlanc commented on June 15, 2024 2

Hi Rob,

Well, it is a bit hard to give you our full CDK implementation because we haven't open-sourced it (yet); it's still in development.

But the bit where we do the injection is where we define the template for a resolver:

/**
 * Add a resolver to the API
 */
public addResolver(config: ResolverConfig) {
  const options: any = {
    apiId: this.api.attrApiId,
    typeName: config.type,
    fieldName: config.name,
    requestMappingTemplate: this.addEnv(ResolverService.Instance.resolvers[config.type][`${config.template}-req`]),
    responseMappingTemplate: ResolverService.Instance.resolvers[config.type][`${config.template}-res`]
  };

  if (config.kind === ResolverKind.PIPELINE && config.pipelineFunctions && Array.isArray(config.pipelineFunctions)) {
    options.kind = ResolverKind.PIPELINE;
    options.pipelineConfig = {
      functions: []
    };
    config.pipelineFunctions.forEach(name => {
      options.pipelineConfig.functions.push(this.pipelineFunctions[name].attrFunctionId);
    });
  } else {
    options.dataSourceName = config.dataSourceName;
  }
  return new CfnResolver(this, `Resolver_${config.name}`, options);
}

The important bit here is this.addEnv, which is applied to the requestMappingTemplate property.
This function is nothing more than a concat function:

protected addEnv(template: string) {
  return `#set($env=${JSON.stringify(this.resolverEnvironment)})\n${template}`;
}

The resolverEnvironment is a property of an AppSync construct (class) that creates an AWS resource using Constructs (check the CDK examples for TypeScript).

So when you deploy your API for an environment (local, dev, staging, etc.), it will automatically inject the $env variable into your template.
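Pulled out of the class for illustration, the helper above amounts to the following (the resolverEnvironment contents here are assumptions; the real values come from the poster's stack configuration):

```javascript
// Standalone version of the addEnv helper above: it prepends a VTL #set
// line so $env is available inside the mapping template.
function addEnv(template, resolverEnvironment) {
  return `#set($env=${JSON.stringify(resolverEnvironment)})\n${template}`;
}

// Example values are hypothetical
const vtl = addEnv('{ "version": "2018-05-29" }', {
  env: 'dev',
  tablePrefix: 'coralconsole_',
});
```

The resulting template starts with `#set($env={...})`, so the original VTL body can reference `$env.env`, `$env.tablePrefix`, and so on.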

majirosstefan avatar majirosstefan commented on June 15, 2024 2

@jonperryxlm Thanks for the reply and suggestions (it helped).

I figured it out just a few minutes ago (I also needed to re-deploy the API, because it seems the AppSync stack and my local stack got out of sync and it was throwing quite strange errors during deployment).

I am currently writing that missing blog post so I won't forget it, as my brain works similarly to yours (I mean the trauma-shielding thing).

I will post the link in this comment later.

Link: https://stefan-majiros.com/blog/custom-graphql-batchgetitem-resolver-in-aws-amplify-for-appsync/

neal-k avatar neal-k commented on June 15, 2024 2

Any updates on this? This is a serious limitation. The whole point of Amplify is to simplify app development. I have a custom resolver for TransactWriteItems that I'll have to maintain manually until there's a fix. This issue was opened over 3 years ago. Please provide a remedy 😞

robboerman2 avatar robboerman2 commented on June 15, 2024 1

@mattiLeBlanc ok, waiting patiently for your response :)

PatrykMilewski avatar PatrykMilewski commented on June 15, 2024 1

It would be nice to have this feature

WiL-dev avatar WiL-dev commented on June 15, 2024 1

A nice-to-have feature ×2. In the meantime, another workaround is to create a Lambda function that performs the batch operation and call it from the API.

maziarzamani avatar maziarzamani commented on June 15, 2024 1

Without this feature it is immensely complicated, and not scalable, to build batch operations or any other custom resolvers in AppSync. The table names need to be hardcoded, which is an absolute no-go.

jonperryxlm avatar jonperryxlm commented on June 15, 2024 1

@majirosstefan I haven't done anything with this for over a year, so I'm surprised (and a little annoyed) that this is still an issue people are struggling with, with no help from the AWS Amplify team...

To use the solution from aws-amplify/amplify-cli#1946, the first thing I would check is that you actually have a data source of type "NONE" in the AppSync UI (AWS AppSync > [YOUR API] > Data Sources). You can call it anything, but the type has to be NONE and the name you choose is the value to reference in the "DataSourceName" field of the "AddEnvVariablesToStash" object. I just so happened to call my data source of type NONE... NONE.

From memory (and I apologise if I'm remembering incorrectly), I think you need to create the NONE data source in the AWS AppSync UI (AWS AppSync > [YOUR API] > Data Sources > Create data source). You might be able to do it programmatically, but I have a feeling that at the time I read something about creating it in the UI.

I hope that helps. I know how frustrating this issue is. It's probably all I can do to help unfortunately because it's been so long and my brain is shielding me from the trauma.

SwaySway avatar SwaySway commented on June 15, 2024

@mattiLeBlanc Are you referring to a Lambda environment variable or a parameter?
Currently in AppSync you can pass the table name as a field in the schema; otherwise you'll need to specify the table name in the request mapping template. This seems similar to the following feature request: #439. I'll review this issue with the team as well.

mattiLeBlanc avatar mattiLeBlanc commented on June 15, 2024

My function resolver (AppSync pipeline) uses a BatchPutItem:

#set($postsdata = [])
#foreach($id in ${ctx.args.groups})
    #set($item = {
        "pk": $id,
        "sk": "POST:POST_ID=$ctx.stash.postId",
        "type": "POST_IN_GROUP",
        "title": $ctx.args.title
    })
    $util.qr($postsdata.add($util.dynamodb.toMapValues($item)))
#end

{
    "version" : "2018-05-29",
    "operation" : "BatchPutItem",
    "tables" : {
        "coralconsole_$ctx.stash.env": $util.toJson($postsdata)
    }
}

and as you can see, I am using a stashed env variable in the table name in an attempt to make this work.

However, in CloudFormation we already have an env variable available, so it might be possible to expose that in the $ctx object so that we don't have to call a Lambda function in a pipeline to specify the environment-specific table name.

mattiLeBlanc avatar mattiLeBlanc commented on June 15, 2024

One way we resolved this is by using the AWS CDK to provision our cloud resources.
When we build our dist, we read the resolver templates and inject the table name.
Then that is deployed.
It works pretty well.

robboerman2 avatar robboerman2 commented on June 15, 2024

@mattiLeBlanc hmm, that could be a good workaround. Could you share some of the code you wrote with the AWS CDK to accomplish that?

mattiLeBlanc avatar mattiLeBlanc commented on June 15, 2024

robboerman2 avatar robboerman2 commented on June 15, 2024

@mattiLeBlanc thanks for the example, this will help.

mattiLeBlanc avatar mattiLeBlanc commented on June 15, 2024

@mattiLeBlanc thanks for the example, this will help.

I hope it does. We found implementing the CDK pretty cumbersome at the start, especially in a bigger project with 3 stacks and one root stack. But I hope you will figure it out; otherwise, just ask me in this thread.

alimeerutech avatar alimeerutech commented on June 15, 2024

@mattiLeBlanc I was unable to find resolverEnvironment anywhere in https://github.com/aws-samples/aws-cdk-examples or in any of the docs. Are you referring to another git repo / CDK constructs example code?

mattiLeBlanc avatar mattiLeBlanc commented on June 15, 2024

@mattiLeBlanc I was unable to find resolverEnvironment anywhere in https://github.com/aws-samples/aws-cdk-examples or in any of the docs. Are you referring to another git repo / CDK constructs example code?

Sorry for the late reply:

resolverEnvironment is something we added to our own stack, so it is not a standard property you would find, like region or account.
We get our environment from process.env.ENV, and we set it in Bitbucket (deploy variables) or in our local terminal env variables.

Does that make sense?

beerth avatar beerth commented on June 15, 2024

+1 for that feature.
Any news on that?

Thx and all the best!

beerth avatar beerth commented on June 15, 2024

Yes, this is already supported, through substitutions.

Could you please provide some more details? An example would be great. I really appreciate your support!

idobleicher avatar idobleicher commented on June 15, 2024

Hey, any news on this? Has anyone been successful putting the APPSYNC_ID and the ENV into the VTL params?

Many thanks!

kldeb avatar kldeb commented on June 15, 2024

My very easy and very unsophisticated workaround is to create multiple fields for each env with hard-coded values.

eciuca avatar eciuca commented on June 15, 2024

Thanks for the workaround @wai-chuen. I want to add a +1 for this functionality. We also have tables with the API ID and env name in their names...
LATER EDIT: $context needs to be replaced with $ctx.
@aws As someone said before, for Batch*Item operations you need the table name. This means that in order to reuse a VTL template for BatchDeleteItem, for example, you need to have the table name available in $ctx.stash or somewhere. Currently I am creating separate pipeline resolvers and separate templates for each entity type, but if I had the table name available in the context I could use just one template.

majirosstefan avatar majirosstefan commented on June 15, 2024

I put this into the Resources object in stacks/CustomResources.json, as I wanted to use a pipeline resolver:

"Resources": {
    "EmptyResource": {
      "Type": "Custom::EmptyResource",
      "Condition": "AlwaysFalse"
    },
    "AddEnvVariablesToStash": {
      "Type": "AWS::AppSync::FunctionConfiguration",
      "Properties": {
        "ApiId": {
          "Ref": "AppSyncApiId"
        },
        "DataSourceName": "NONE",
        "Description": "Sets $ctx.stash.env to the Amplify environment and $ctx.stash.apiId to the Amplify API ID",
        "FunctionVersion": "2018-05-29",
        "Name": "AddEnvVariablesToStash",
        "RequestMappingTemplate": "{\n          \"version\": \"2017-02-28\",\n          \"payload\": {}\n        }",
        "ResponseMappingTemplate": {
          "Fn::Join": [
            "",
            [
              "$util.qr($ctx.stash.put(\"env\", \"",
              {
                "Ref": "env"
              },
              "\"))\n$util.qr($ctx.stash.put(\"apiId\", \"",
              {
                "Ref": "AppSyncApiId"
              },
              "\"))\n$util.toJson($ctx.prev.result)"
            ]
          ]
        }
      }
    },
    "FunctionQueryBatchFetchTodo": {
      "Type": "AWS::AppSync::FunctionConfiguration",
      "Properties": {
        "ApiId": {
          "Ref": "AppSyncApiId"
        },
        "DataSourceName": "TodoTable",
        "FunctionVersion": "2018-05-29",
        "Name": "FunctionQueryBatchFetchTodo",
        "RequestMappingTemplateS3Location": {
          "Fn::Sub": [
            "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.batchFetchTodo.req.vtl",
            {
              "S3DeploymentBucket": {
                "Ref": "S3DeploymentBucket"
              },
              "S3DeploymentRootKey": {
                "Ref": "S3DeploymentRootKey"
              }
            }
          ]
        },
        "ResponseMappingTemplateS3Location": {
          "Fn::Sub": [
            "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.batchFetchTodo.res.vtl",
            {
              "S3DeploymentBucket": {
                "Ref": "S3DeploymentBucket"
              },
              "S3DeploymentRootKey": {
                "Ref": "S3DeploymentRootKey"
              }
            }
          ]
        }
      }
    },
    "PipelineQueryBatchResolver": {
      "Type": "AWS::AppSync::Resolver",
      "Properties": {
        "ApiId": {
          "Ref": "AppSyncApiId"
        },
        "TypeName": "Query",
        "FieldName": "batchFetchTodo",
        "Kind": "PIPELINE",
        "RequestMappingTemplate": "{}",
        "ResponseMappingTemplate": "$util.toJson($ctx.result)",
        "PipelineConfig": {
          "Functions": [
            {
              "Fn::GetAtt": [
                "AddEnvVariablesToStash",
                "FunctionId"
              ]
            },
            {
              "Fn::GetAtt": [
                "FunctionQueryBatchFetchTodo",
                "FunctionId"
              ]
            }
          ]
        }
      }
    },
    "NONE": {
      "Type": "AWS::AppSync::DataSource",
      "Properties": {
        "ApiId": {
          "Ref": "AppSyncApiId"
        },
        "Name": "NONE",
        "Type": "NONE"
      }
    }
  },

I am using "aws-amplify": "^4.3.12" and "aws-amplify-react-native": "^6.0.2". Running amplify --version in the terminal prints 5.1.0.

I am still getting this error, no matter whether I remove the 'NONE' definition or keep it:

No data source found named NONE (Service: AmazonDeepdish; Status Code: 404; Error Code: NotFoundException; Request ID: 4497b926-78ae-464a-bad0-f98a865baffb; Proxy: null)

It would be really nice if somebody from the AWS Amplify team wrote at least a simple blog post about this (after almost 3 years, instead of closing tickets/bug reports).

This is a schema that I used:

type Todo @model {
  id: ID!
  name: String!
  description: String
  priority: String
}

type Query {
  batchFetchTodo(ids: [ID]): [Todo]
}

josefaidt avatar josefaidt commented on June 15, 2024

NOTE: look into adding env details into stash per #408

ejubber avatar ejubber commented on June 15, 2024

Is there any update on this?

zirkelc avatar zirkelc commented on June 15, 2024

I'd like to add another issue where support for environment variables is missing:

The IAM authorization mode for AppSync requires adding all allowed roles or usernames to custom-roles.json:

{
  "adminRoleNames": ["my-iam-role-dev", "my-iam-role-prod"]
}

https://docs.amplify.aws/cli/graphql/authorization-rules/#use-iam-authorization-within-the-appsync-console

These roles will be copied and hardcoded into the generated auth resolvers:

#if( $util.authType() == "IAM Authorization" )
  #set( $adminRoles = ["my-iam-role-dev", "my-iam-role-prod"] )
  #foreach( $adminRole in $adminRoles )
    #if( $ctx.identity.userArn.contains($adminRole) && $ctx.identity.userArn != $ctx.stash.authRole && $ctx.identity.userArn != $ctx.stash.unauthRole )
      #return($util.toJson({}))
    #end
  #end
  #if( ($ctx.identity.userArn == $ctx.stash.authRole) || ($ctx.identity.cognitoIdentityPoolId == "eu-west-1:..." && $ctx.identity.cognitoIdentityAuthType == "authenticated") )
    #set( $isAuthorized = true )
  #end
#end

The IAM roles contain the environment name (dev or prod), and we currently have no possibility to replace this value with the correct Amplify env. It would be good to have support for the ${env} syntax that is already supported for function resolvers: https://docs.amplify.aws/cli/graphql/custom-business-logic/#reference-amplify-environment-name

The alternative presented in this issue, a pipeline resolver that adds the env to ctx.stash.env so it can be referenced in the IAM role as ${ctx.stash.env}, might work, but it requires overwriting every single resolver.

gxxcastillo avatar gxxcastillo commented on June 15, 2024

Not sure if this helps anyone, but I needed to know the full table names in the deployed environment and solved it by using override.ts to create a map of all model names to table names and inserting that into all of my resolvers. It's not elegant and far from optimal, but it did unblock me from having to manually update every resolver every time I pushed new changes.

The way this looked was something like:

  1. Iterate over every resources.model and create a map of modelName -> model.modelDBTable.tableName.
  2. Create a template string that puts all table names into the context, something like: $util.qr($ctx.stash.put("tableNames", ${JSON.stringify(tableNameMap)}))
  3. Iterate over the request mapping templates of the resolvers, split each by newlines into an array, insert the template string from step 2 into that array, and join the array back into a string.
  4. Assign the new mapping template to the resolver: models[modelName].resolvers[resolverName].requestMappingTemplate = newRequestMappingTemplate;

The end result is I could do something like "table": $ctx.stash.tableNames.{tableName} for all my tables in my TransactWriteItems operations.
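The stash-injection part of those steps might look roughly like this (the function name, model names, and table names are hypothetical; the real override.ts would derive the map from the Amplify-generated resources):

```javascript
// Splice a $ctx.stash.put line containing the model -> table-name map
// into the top of a request mapping template, as described above.
function injectTableNames(requestMappingTemplate, tableNameMap) {
  const stashLine = `$util.qr($ctx.stash.put("tableNames", ${JSON.stringify(tableNameMap)}))`;
  const lines = requestMappingTemplate.split('\n');
  lines.splice(0, 0, stashLine); // insert before the original template body
  return lines.join('\n');
}

// Example values are assumptions
const patched = injectTableNames(
  '{ "version": "2018-05-29" }',
  { Todo: 'Todo-abc123-dev', Note: 'Note-abc123-dev' },
);
```

The patched template can then reference `$ctx.stash.tableNames.Todo` wherever a physical table name is required.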

chstrong avatar chstrong commented on June 15, 2024

I would also much appreciate the ability to pass environment variables into VTL, just as with Lambda. At the moment I'm stuck. I'm using CDK for the build. The only possibility I see is using Lambda, which is slower.

joekendal avatar joekendal commented on June 15, 2024

Another use case is updating an OpenSearch index (create a new index -> reindex documents) and moving traffic to the new index without downtime. Without the ability to use environment variables, we need to redeploy.

EDIT: use an alias instead.

micchickenburger avatar micchickenburger commented on June 15, 2024

Another use case is when using an HTTP datasource, like for publishing SNS messages, in which case the AppSync resolver or function needs the SNS Topic ARN.

mobob avatar mobob commented on June 15, 2024

+1. The inline-code option seems about the only option, and it's rather gross. Env vars or some kind of build-time config, please!

jeffshep avatar jeffshep commented on June 15, 2024

Adding a Terraform workaround for those who don't want to go down the pipeline resolver route.

Defining a UNIT JS resolver in Terraform uses the code argument to pass the file that contains the resolver logic.
Using the templatefile function, instead of the documented file function, allows for template syntax and string substitution.

So the request function can look like:

    operation: "BatchGetItem",
    tables: {
      "${table_name}": {
        keys: ctx.args.id.map((id) => util.dynamodb.toMapValues({ id })),
        consistentRead: true,
      },
    },

And the Terraform resource can use the resource attribute to dynamically set the table name (e.g. aws_dynamodb_table.table.name):
code = templatefile("code-directory", { table_name = aws_dynamodb_table.table.name })

pedroprieto avatar pedroprieto commented on June 15, 2024

So, how do you use this in a resource.ts file with Amplify Data? How do you pass the table name to the JavaScript custom resolver?

File resource.ts:

  Item: a
    .model({
      name: a.string(),
    })
    .authorization((allow) => [allow.guest(), allow.authenticated()]),

  itemList: a
    .query()
    .arguments({
      items: a.string().array(),
      /* should I pass an extra argument in here? How do I get the table name? */
    })
    .returns(a.ref("Item").array())
    .handler(
      a.handler.custom({
        dataSource: a.ref("Item"),
        entry: "./item-list.js",
      })
    )
    .authorization((allow) => [allow.authenticated()]),

File item-list.js:

export function request(ctx) {
  return {
    operation: "BatchGetItem",
    tables: {
      TABLE_NAME: {  /* How do I get the right name in here? */
        keys: ctx.args.items.map((id) => util.dynamodb.toMapValues({ id })),
        consistentRead: true,
      },
    },
  };
}

export function response(ctx) {
  if (ctx.error) {
    util.error(ctx.error.message, ctx.error.type);
  }
  return ctx.result.data.TABLE_NAME;
}
