aws-amplify / amplify-category-api

The AWS Amplify CLI is a toolchain for simplifying serverless web and mobile development. This plugin provides functionality for the API category, allowing for the creation and management of GraphQL and REST-based backends for your Amplify project.

Home Page: https://docs.amplify.aws/

License: Apache License 2.0


amplify-category-api's Introduction


AWS Amplify API Category

The AWS Amplify CLI is a toolchain which includes a robust feature set for simplifying mobile and web application development. The CLI uses AWS CloudFormation and nested stacks to allow you to add or modify configurations locally before you push them for execution in your account.

This repo manages the API category within the Amplify CLI. The category is responsible for managing the GraphQL build and transformation processes, generating resources to deploy into your cloud stack in order to compute and store data for your GraphQL and REST endpoints, and providing inputs to codegen processes for use later in your end application.

Install the CLI

  • Requires Node.js® version 18 or later

Install and configure the Amplify CLI as follows:

$ npm install -g @aws-amplify/cli
$ amplify configure

Note: If you're having permission issues on your system installing the CLI, please try the following command:

$ sudo npm install -g @aws-amplify/cli --unsafe-perm=true
$ amplify configure

Category specific commands:

The following table lists the current set of commands supported by the Amplify API Category Plugin.

Command Description
amplify api add Takes you through the steps in the CLI to add an API resource to your backend.
amplify api add-graphql-datasource Takes you through the steps in the CLI to import an existing Aurora Serverless data source into an existing GraphQL API resource.
amplify api update Takes you through the steps in the CLI to update an API resource.
amplify api gql-compile Compiles your GraphQL schema and generates a corresponding CloudFormation template.
amplify api push Provisions only API cloud resources with the latest local developments.
amplify api remove Removes an API resource from your local backend. The resource is removed from the cloud on the next push command.
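
A typical flow chains these commands together, for example:

$ amplify api add
$ amplify api gql-compile
$ amplify api push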


Developing

To set up your local development environment, go to Local Environment Setup.

To test your category, do the following:

cd <your-test-front-end-project>
amplify-dev init
amplify-dev <your-category> <subcommand>

Before pushing code or sending a pull request, do the following:

  • At the command line, run yarn lint at the top-level directory. This invokes eslint to check for lint errors in all of our packages.
  • You can use yarn lint to find some of the lint errors. To attempt to fix them, go to the package that has errors and run yarn lint-fix (see the example below).
  • If there are any remaining lint errors, resolve them manually. Linting your code is a best practice that ensures good code quality, so it's important that you don't skip this step.
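
For example, from the repository root (the package path is illustrative):

$ yarn lint
$ cd packages/amplify-graphql-model-transformer
$ yarn lint-fix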

Contributing

We are thankful for any contributions from the community. Look at our Contribution Guidelines.



amplify-category-api's Issues

Custom query depends on built input, ModelFooConnection types

Note: If your question is regarding the AWS Amplify Console service, please log it in the
official AWS Amplify Console forum

Which Category is your question related to?
API

What AWS Services are you utilizing?
AppSync, DynamoDB, Elasticsearch, Cognito

Provide additional details e.g. code snippets
In schema.graphql:

type Location {
  lat: Float
  lon: Float
}

type Address @model @searchable {
  id: ID!
  civicNumber: String
  fullAddress: String!
  localityName: String
  localityType: String
  location: Location
  provinceCode: String
  rawJSON: String
  siteID: String
  streetDirection: String
  streetName: String
  streetType: String
  unitNumber: String
  noContact: Boolean
  noSolicit: Boolean
}

input BoundsInput {
  nw: LocationInput
  se: LocationInput
}

input LocationInput {
  lat: Float
  lon: Float
}

enum SearchableSortDirection { # Duplicate from build copy
  asc
  desc
}

enum SearchableAddressSortableFields { # Duplicate from build copy
  id
  civicNumber
  fullAddress
  localityName
  localityType
  provinceCode
  rawJSON
  siteID
  streetDirection
  streetName
  streetType
  unitNumber
  noContact
  noSolicit
}

input SearchableAddressSortInput { # Duplicate from build copy
  field: SearchableAddressSortableFields
  direction: SearchableSortDirection
}

type ModelAddressConnection { # Duplicate from build copy
  items: [Address]
  nextToken: String
}

type Query {
  addressInBounds(bounds: BoundsInput!, sort: SearchableAddressSortInput, limit: Int, nextToken: Int): ModelAddressConnection
  nearbyAddresss(location: LocationInput!, km: Int, sort: SearchableAddressSortInput, limit: Int, nextToken: Int): ModelAddressConnection
}

The input SearchableAddressSortInput, the type ModelAddressConnection, and the enums SearchableSortDirection and SearchableAddressSortableFields have been manually copied from the codegen-created build directory copy of the schema as a workaround.

If I don't copy the type definitions, when running amplify push I receive: Type "SearchableAddressSortableFields" not found in document.

Is there a way to make custom query resources without having to duplicate these auto-generated input, type, and enum definitions?

union and connection types

Which Category is your question related to?
GraphQL Transformer

What AWS Services are you utilizing?
AWS AppSync GraphQL Transformer

Provide additional details e.g. code snippets
Are union connection types supported, and how should they be properly annotated?

union MediaMetaData = AudioMetaMedia

# Audio File type meta data
type AudioMetaMedia @model @searchable {
    id: ID!
    # Duration of audio in seconds
    duration: Int!
    # associated media
    media: Media! @connection(name: "MediaAudioMetaMedia")
}

# Content Media type holder
type Media @model {
    id: ID
    # Display name of media
    name: String
    # Display description of media
    description: String
    # Media file stored in S3
    data: S3Object @connection
    # Subtitle file associated with the media file stored in S3
    subtitle: S3Object @connection
    # Type of asset (e.g. Movie, Audio, etc.)
    type: MediaType
    # Meta data associated with media file (e.g. duration for audio)
    mediaMetaData: MediaMetaData @connection(name: "MediaAudioMetaMedia")
}

Is this correct? Also, what happens when you add another type to the union? Do you keep the same connection name?

RFC - @auth directive improvements

This document will outline designs for a number of new features relating to authentication & authorization within the GraphQL Transform. The goal of these features is to fill gaps and introduce new mechanisms that make protecting your valuable information easier.

Proposal 1: Replace 'queries' and 'mutations' arguments with 'operations'

Merged by aws-amplify/amplify-cli#1262

Currently an @auth directive like this:

type Task @model @auth(rules: [{allow: owner}]) {
    id: ID!
    title: String
    owner: String
}

causes these changes to the following resolvers:

  1. Query.getTask - Returns the post only if the logged in user is the post's owner.
  2. Query.listTasks - Filter items such that only owned posts are returned.
  3. Mutation.createTask - If an owner is provided via $ctx.args.input.owner and matches the identity of the logged in user, succeed. If no owner is provided, set logged in user as the owner, else fail.
  4. Mutation.updateTask - Append a conditional expression that will only update the record if the logged in user is its owner.
  5. Mutation.deleteTask - Append a conditional expression that will only delete the record if the logged in user is its owner.

In other words, the @auth directive currently protects the root level query & mutation fields that are generated for an @model type.

Problem: The 'queries' and 'mutations' arguments imply top level protection

GraphQL APIs are a graph and we need to be able to define access rules on any field, not just the top level fields.

Solution

I suggest replacing the queries and mutations arguments on the @auth directive with a single operations argument. This would be the new @auth directive definition.

directive @auth(rules: [AuthRule!]!) on OBJECT
input AuthRule {
    allow: AuthStrategy!
    ownerField: String # defaults to "owner"
    identityField: String # defaults to "cognito:username"
    groupsField: String
    groups: [String]

    # The new argument
    operations: [ModelOperation]

    # Old arguments
    queries: [ModelQuery] @deprecated(reason: "The 'queries' argument will be deprecated in the future. Please replace this argument with the 'operations' argument.")
    mutations: [ModelMutation] @deprecated(reason: "The 'mutations' argument will be deprecated in the future. Please replace this argument with the 'operations' argument.")
}
enum AuthStrategy { owner groups }

# The new enum
enum ModelOperation { create update delete read }

# The old enums
enum ModelQuery { get list }
enum ModelMutation { create update delete }

This change generalizes the config such that it implies all read operations on that model will be protected, not just the top level 'get' & 'list' queries. Auth rules that use the 'read' operation will be applied to top level query fields, @connection resolvers, top level fields that query custom indexes, and subscription fields. Auth rules that use the 'create', 'update', and 'delete' operations will be applied to createX, updateX, and deleteX mutations respectively. Rules using queries & mutations will keep the same behavior, and rules using operations will get the new behavior. The queries & mutations arguments will eventually be removed in a future major release.
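
As a sketch of the proposed argument, the Task example above could scope its owner rule to reads and deletes only:

type Task @model @auth(rules: [{ allow: owner, operations: [read, delete] }]) {
    id: ID!
    title: String
    owner: String
}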

Protect @connections by default

Merged by aws-amplify/amplify-cli#1262

Once the change from queries/mutations -> operations has been implemented, we will want to go back and implement any missing authorization logic in @connection fields by default.

For example, given this schema:

type Post @model @auth(rules: [{allow: owner}], operations: [create, update, delete, read]) {
    id: ID!
    title: String
    owner: String
}
type Blog @model {
    id: ID!
    title: String
    # This connection references type Post which has auth rules and thus should be authorized.
    posts: [Post] @connection
}

The new code would add authorization logic to the Blog.posts resolver such that only owners of the post would be able to see the posts for a given blog. It is important to note that the new logic will restrict access such that you cannot see records that you are not supposed to see, but it will not change any index structures under the hood. You will be able to use @connection with the new custom index features to optimize the access pattern and then use @auth to protect access within that table or index.

Proposal 2: Implement @auth on @searchable search fields

Github Issues

Problem

Currently Query.searchX resolvers generated by @searchable are not protected by @auth rules.

Solution

The Elasticsearch DSL is very powerful and will allow us to inject Elasticsearch query terms and implement authorization checks within Elasticsearch. This work will need to handle static & dynamic ownership and group based authorization rules. Any auth rule that includes the 'read' operation will protect the Query.searchX field.
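
As a rough sketch (the exact generated DSL is not specified here), an ownership rule could be enforced by wrapping the caller's query in a bool filter on the owner field:

{
    "query": {
        "bool": {
            "must": [{ "match_all": {} }],
            "filter": { "terms": { "owner": ["logged-in-username"] } }
        }
    }
}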

Proposal 3: Make @auth protect subscription fields

Problem: @auth does not protect subscription fields.

type Post @model @auth(rules: [{allow: owner}]) {
    id: ID!
    title: String
    owner: String
}

Currently subscriptions are not protected automatically.

Solution

AppSync subscription queries are authorized at connect time. That means that we need to parameterize the subscription queries in such a way that any relevant authorization logic is included in the subscription query itself. In the case of ownership @auth, this means that the client must pass an owner as a query argument and the subscription resolver should verify that the logged in user and owner are the same.

For example, given this schema:

type Post @model @auth(rules: [{allow: owner}]) {
    id: ID!
    title: String
    owner: String
}

The following subscription fields would be output:

type Subscription {
    onCreatePost(owner: String): Post
    onUpdatePost(owner: String): Post
    onDeletePost(owner: String): Post
}

and when running a subscription query, the client must provide a value for the owner:

subscription OnUpdatePost($owner: String) {
    onUpdatePost(owner: $owner) {
        id
        title
    }
}

The proposed change would create a new subscription resolver for each subscription field generated by the @model. Each subscription resolver would verify the provided owner matches the logged-in identity and would fail the subscription otherwise.
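
A minimal sketch of such a subscription resolver's owner check, assuming a username-based owner field (illustrative, not the generated template):

#if( $ctx.args.owner != $ctx.identity.username )
  ## Fail the subscription at connect time if the caller is not the owner.
  $util.unauthorized()
#end
{
    "version": "2017-02-28",
    "payload": {}
}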

There are a few limitations to this approach:

  1. There is a limit of 5 arguments per subscription field.
    • e.g. a field onCreatePost(owner: String, groups: String, otherOther: String, anotherOwner: String, anotherListOfGroups: String): Post has too many arguments and is invalid. To handle this, the CLI can emit a warning prompting you to customize your subscription field in the schema itself.
  2. Subscription fields are equality checked against published objects. This means that subscribing to objects with multi-owner or multi-group auth might behave slightly differently than expected.
    • When you subscribe you will need to pass the full list of owners/groups on the item, not just the calling identity.

As an example to point (2) above, imagine this auth rule:

type Post @model @auth(rules: [{allow: owner, ownerField: "members"}]) {
    id: ID!
    title: String
    members: [String]
}

Let's say that we want to subscribe to all new posts where I am a member.

subscription {
    onCreatePost(members: ["my-user-id"]) {
        id
        title
        members
    }
}

AppSync messages are published to subscriptions when the result of the mutation, to which the subscription field is subscribed, contains fields that equal the values provided by the subscription arguments. That means that if I were to publish a message via a mutation,

mutation {
    createPost(input: { title: "New Article", members: ["my-user-id", "my-friends-user-id"]}) {
        id
        title
        members
    }
}

the subscription started before would not be triggered because ["my-user-id", "my-friends-user-id"] is not the same as ["my-user-id"]. I bring this up for clarity but I still think this feature is useful. Single owner & group based authorization will behave as expected.

Proposal 4: Field level @auth

Merged by aws-amplify/amplify-cli#1262

Currently an @auth directive like this:

type Task @model @auth(rules: [{allow: owner}], queries: [get, list], mutations: [create, update, delete]) {
    id: ID!
    title: String
    owner: String
}

causes these changes to the following resolvers:

  1. Query.getTask - Returns the post only if the logged in user is the post's owner.
  2. Query.listTasks - Filter items such that only owned posts are returned.
  3. Mutation.createTask - If an owner is provided via $ctx.args.input.owner and matches the identity of the logged in user, succeed. If no owner is provided, set logged in user as the owner, else fail.
  4. Mutation.updateTask - Append a conditional expression that will only update the record if the logged in user is its owner.
  5. Mutation.deleteTask - Append a conditional expression that will only delete the record if the logged in user is its owner.

In other words, the @auth directive currently protects the root level query & mutation fields.

Github Issues

Problem: You cannot protect @connection resolvers

For example, look at this schema.

type Task @model {
    id: ID!
    title: String
    owner: String
    notes: [Notes] @connection(name: "TaskNotes")
}
# We are trying to specify that notes should only be visible by the owner but
# we are unintentionally opening access via *Task.notes*.
type Notes @model @auth(rules: [{allow: owner}]) {
    id: ID!
    title: String
    task: Task @connection(name: "TaskNotes")
    owner: String
}

Since only top level fields are protected and we do not have an @auth directive on the Task model, we are unintentionally opening access to notes via Task.notes.

Solution

We discussed having @auth rules on OBJECTs automatically protect connection fields in proposal 1, but I also suggest opening the @auth directive such that it can be placed on both FIELD_DEFINITION and OBJECT nodes. This will result in an updated definition for @auth:

directive @auth(rules: [AuthRule!]!) on OBJECT, FIELD_DEFINITION
# ...

You may then use the @auth directive on individual fields in addition to the object type definition. An @auth directive used on an @model OBJECT will augment top level queries & mutations while an @auth directive used on a FIELD_DEFINITION will protect that field's resolver by comparing the identity to the source object designated via $ctx.source.

For example, you might have:

type User @model {
    id: ID!
    username: String
    
    # Can be used to protect @connection fields.
    # This resolver will compare the $ctx.identity to the "username" attribute on the User object (via $ctx.source in the User.posts resolver).
    # In other words, we are authorizing access to posts based on information in the user object.
    posts: [Post] @connection(name: "UserPosts") @auth(rules: [{ allow: owner, ownerField: "username" }])

    # Can also be used to protect other fields
    ssn: String @auth(rules: [{ allow: owner, ownerField: "username" }])
}
# Users may create, update, delete, get, & list at the top level if they are the
# owner of this post itself.
type Post @model @auth(rules: [{ allow: owner }]) {
    id: ID!
    title: String
    author: User @connection(name: "UserPosts")
    owner: String
}

An important thing to notice is that the @auth directive compares the logged-in identity to the object exposed by $ctx.source in the resolver of that field. A side effect of this is that an @auth directive on a field of the top level query type doesn't have much meaning, since $ctx.source will be an empty object. This is ok since @auth rules on OBJECT types handle protecting top level query/mutation fields.

Also note that the queries and mutations arguments on the @auth directive are invalid but the operations argument is allowed. The transform will validate this and fail at compile time with an error message pointing you to the mistake, e.g. this is invalid:

type User @model {
    # @auth on FIELD_DEFINITION is always protecting a field that reads data.
    # Fails with error "@auth directive used on field User.ssn cannot specify arguments 'mutations' and 'queries'"
    ssn: String @auth(rules: [{ allow: owner, mutations: [create], queries: [get] }])

    # No one but the owner may update/delete/read their own email.
    email: String @auth(rules: [{ allow: owner, operations: [update, delete, read] }])
}

The implementation for allowing operations in field level @auth directives is a little different.

  1. create - When placed on a @model type, will verify that only the owner may pass the field in the input arg. When used on a non @model type, this does nothing.
  2. update - When placed on a @model type, will verify that only the owner may pass the field in the input arg. When used on a non @model type, this does nothing.
  3. delete - When placed on a @model type, will verify that only the owner may set the value to null. Only object level @auth directives impact delete operations so this will actually augment the update mutation and prevent passing null if you are not the owner.
  4. read - Places a resolver on the field (or updates an existing resolver in the case of @connection) that restricts access to the field. When used on a non-model type, this still protects access to the resolver.

Proposal 5: And/Or in @auth rules

Github Issues

Problem

Currently all @auth rules are joined via a top level OR operation. For example, the schema below results in rules where you can access Post objects if you are the owner OR if you are a member of the "Admin" group.

type Post @model @auth(rules: [{ allow: owner }, { allow: groups, groups: ["Admin"] }]) {
    id: ID!
    title: String
    author: User @connection(name: "UserPosts")
    owner: String
}

It would be useful if you could organize these auth rules into more complex expressions combined with AND and OR.

Solution

We can accomplish this by adding to the @auth definition.

directive @auth(rules: [TopLevelAuthRule!]!) on OBJECT, FIELD_DEFINITION
input TopLevelAuthRule {
    # For backwards compat, any rule specified at the same level as an "and"/"or" will be joined via an OR.
    allow: AuthStrategy!
    ownerField: String # defaults to "owner"
    identityField: String # defaults to "cognito:username" for UserPools, "username" for IAM, "sub" for OIDC
    groupsField: String
    groups: [String]
    
    # This only exists in top level rules and specifies operations for all the rules even when combined with and/or.
    # Nested "operations" tags are not allowed because they would confuse evaluation logic.
    operations: [ModelOperation]

    # New recursive fields on AuthRule
    and: [AuthRule]
    or: [AuthRule]   
}
input AuthRule {
    allow: AuthStrategy!
    ownerField: String # defaults to "owner"
    identityField: String # defaults to "cognito:username" for UserPools, "username" for IAM, "sub" for OIDC
    groupsField: String
    groups: [String]

    # New recursive fields on AuthRule
    and: [AuthRule]
    or: [AuthRule]
}
enum AuthStrategy { owner groups }
# Reduces get/list to read. See explanation below.
enum ModelOperation { create update delete read }

This would allow users to define advanced auth configurations like:

type User
  @model 
  @auth(rules: [{
    and: [
      { allow: owner },
      { or: [
        { allow: groups, groups: ["Admin"] },
        { allow: owner, ownerField: "admins" }
      ] }
    ],
    operations: [read]
  }]) {
  id: ID!
  admins: [String]
  owner: String
}
# Logically: ( isOwner && ( isInAdminGroup || isMemberOfAdminsField ) )

The generated resolver logic will need to be updated to evaluate the expression tree.
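
For instance, the resolver for the User example above might evaluate the tree along these lines (a hand-written VTL sketch, not generated output):

## Evaluate ( isOwner && ( isInAdminGroup || isMemberOfAdminsField ) )
#set( $isOwner = $ctx.result.owner == $ctx.identity.username )
#set( $isInAdminGroup = $ctx.identity.claims.get("cognito:groups").contains("Admin") )
#set( $isMemberOfAdminsField = $ctx.result.admins.contains($ctx.identity.username) )
#if( !($isOwner && ($isInAdminGroup || $isMemberOfAdminsField)) )
  $util.unauthorized()
#end
$util.toJson($ctx.result)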

Proposal 6: Deny by default mode

Github Issues

Problem: There is currently no way to specify deny access by default for Amplify APIs.

If you create an API using a schema:

type Post @model {
    id: ID!
    title: String!
}

then the generated create, update, delete, get, and list resolvers allow access to any request that includes a valid user pool token (for USER_POOL auth). This proposal will introduce a flag that specifies that all operations should be denied by default and thus all fields that do not contain an explicit auth rule will be denied. This will also change the behavior of create mutations such that the logged in user identity is never added automatically when creating objects with ownership auth.

Solution: Provide a flag that enables deny by default

Adding a DenyByDefault flag to parameters.json or transform.conf.json will allow users to specify whether fields without an @auth directive allow access. When deny by default is enabled, the following changes will be made (see the config sketch after this list).

  1. Mutation.createX resolvers will no longer auto-inject the ownership credential when it is not provided when creating objects. Users will have to supply the ownership credential from the client, and it will be validated in the mutation resolver (this happens already when you provide the ownership credential in the input).
  2. All resolvers created by a @model without an @auth directive will be denied by default.
  3. All resolvers created by @searchable on a @model without an @auth will be denied by default.
  4. All resolvers created by @connection that return a @model with an @auth AND that do not have their own field level @auth will be denied by default.
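
A sketch of what the flag could look like in transform.conf.json (the key name is part of this proposal, not an existing setting):

{
    "Version": 5,
    "DenyByDefault": true
}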

For example, with deny by default enabled

type Post @model {
    id: ID!
    title: String
    author: User @connection(name: "UserPosts")
    comments: [Comment] @connection(name: "PostComments")
}
type User @model @auth(rules: [{allow: owner}]) {
    id: ID!
    username: String!
    posts: [Post] @connection(name: "UserPosts")
}
type Comment @model {
    id: ID!
    content: String
    post: Post @connection(name: "PostComments")
}

This mutation would fail:

mutation CreatePost {
    createPost(...) {
        id
        title
    }
}

This mutation would succeed:

mutation CreateUser {
    createUser(input: { username: "[email protected]" }) { # Assuming the logged in user identity is the same
        id
        title
    }
}

This top level query would succeed but Post.comments would fail.

query GetUserAndPostsAndComments {
    getUser(id: 1) { # succeeds assuming this is my user.
        posts { # succeeds because the @auth on User authorizes the child fields
            items {
                title
                comments { # fails because there is no auth rule on Post, Comment, or the Post.comments field.
                    items {
                        content
                    }
                }
            }
        }
    }
}

More details coming soon

  1. Write custom auth logic w/ pipeline functions
  2. Enable IAM auth within the API category

Request for comments

This document details a road map for authorization improvements in the Amplify CLI's API category. If there are use cases that are not covered or you have a suggestion for one of the proposals above please comment below.

In API, output in generated stack missing when no @connection

Describe the bug
In API, a schema type marked with @model will generate a stack in build/stacks
However, if there is no @connection on the type, the Outputs section of the generated CloudFormation template will look like

    "Outputs": {},

instead of

    "Outputs": {
        "GetAttMyObjectDataSourceName": {
            "Value": {
                "Fn::GetAtt": [
                    "MyObjectDataSource",
                    "Name"
                ]
            },
            "Export": {
                "Name": {
                    "Fn::Join": [
                        ":",
                        [
                            {
                                "Ref": "AppSyncApiId"
                            },
                            "GetAtt",
                            "MyObjectDataSource",
                            "Name"
                        ]
                    ]
                }
            }
        }
    },

To Reproduce
Steps to reproduce the behavior:

  1. Add api
  2. Create a schema with one @model type, no @connection
  3. Push
  4. Check the stack

Expected behavior
The Outputs data source name should still be there, as it is necessary information to add a custom resolver that references this data source.

Custom data source with @model (existing DynamoDB table)

Which Category is your question related to?

GraphQL Schema Transformer

What AWS Services are you utilizing?

DynamoDB, AppSync

Provide additional details e.g. code snippets

I already have a number of DynamoDB tables for which I would like to build a GraphQL API. The Amplify CLI's automated creation of queries and mutations is very appealing: I would assume that I would be able to describe my model in schema.graphql and then run amplify api gql-compile to create all of the resolvers and data sources. Is this at all possible? I can find no examples of arguments being passed to the @model directive. I do see that the GraphQL Transformer Tutorial mentions a prompt for using your own table when adding an API:

# When asked if you want to use your own Amazon DynamoDB tables, choose **No**.

However, when creating an API with CLI version 0.1.16 I do not get such a prompt.

Limit entities per user

Which Category is your question related to?
Auth, API
What AWS Services are you utilizing?
Cognito, AppSync
Provide additional details e.g. code snippets
Is there a way to limit the number of entities a user can create?

Example 1:

Let's say I want to maintain a settings object that has information about the user's choices:

type UserSettings @model @auth(rules: [{allow: owner}]) {
  improvesApp: Boolean!
  acceptedPrivacyPolicy: Boolean!
  preferredTheme: String!
}

Is there a way to ensure that only one of these entities can be created per user?

Example 2:

I created SaaS software that limits the owner's usage based on their tier. Let's say it's a Todo app startup. Is there a way to limit the models the user can create?

Preferred solution:
A great way would be a @limit directive for this. Basically, for example 1:

type UserSettings @model @auth(rules: [{allow: owner}]) @limit(field: owner, amount: 1) {
  improvesApp: Boolean!
  acceptedPrivacyPolicy: Boolean!
  preferredTheme: String!
}

And for example 2 (using Cognito groups):

type Todo @model @auth(rules: [{allow: owner}])
  @limit(field: owner, limit: 10)
  @limit(field: owner, group: "Paidtier", limit: 1000)
  @limit(field: owner, group: "Premiumtier", limit: null)
{
  id: ID!
  title: String!
}

Example two could result in every user having a limit of 10 Todos, except users in the paid tier, who could create 1000, and users in the premium tier, who could create infinitely many.

GraphQL Codegen max_length directive - Input validation

Is your feature request related to a problem? Please describe.
I'm always frustrated when users abuse my API and circumvent my form validation by manually triggering API calls.
To be honest, that hasn't happened yet, but it could.

Describe the solution you'd like
It would be cool to be able to restrict API access by the maximum length of the data, similar to how @auth restricts access on an owner basis.

E.g.

type Blog @model {
  id: ID!
  name: String! @max_length(100)
  posts: [Post] @connection(name: "BlogPosts") @max_length(1000)
}

This would result in requests with a longer name throwing an error. And each blog could only have a thousand posts (which would be bad UX, but it's just an example 😄).

Describe alternatives you've considered
The only alternative I could think of would be to write custom resolvers that check the length of the data in the .req.vtl template.

Calling 'amplify api update/add/remove' (REST) overrides custom Access-Control-Allow-Headers

Describe the bug
I need to pass a custom header in the Access-Control-Allow-Headers of my REST API.
As the Amplify CLI doesn't allow me to do this through the command line, I modify the API '*-cloudformation-template.json' file manually to add my custom header for each route.

But when I call 'amplify api update', 'amplify api add', or 'amplify api remove' (or any command modifying this file), every "method.response.header.Access-Control-Allow-Headers" line reverts to its default value, and my custom header is lost.

To Reproduce
Add a custom header in your API '*-cloudformation-template.json' like this:

"x-amazon-apigateway-integration": {
    "responses": {
        "default": {
            ...,
            "responseParameters": {
                ...,
                "method.response.header.Access-Control-Allow-Headers": "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token,X-Amz-User-Agent,X-My-Custom-Header'",
                ...,
            }
        }
    },
    ...
}

Then call 'amplify api update' and update a path without changing anything.

Expected behavior
The custom header should be kept.

Desktop (please complete the following information):

  • OS: Windows
  • Amplify-cli version: 1.6.11

Customize StreamingLambda with @searchable

Note: If your question is regarding the AWS Amplify Console service, please log it in the
official AWS Amplify Console forum

Which Category is your question related to?

AppSync with @searchable

What AWS Services are you utilizing?

AppSync, Amazon Elasticsearch Service

Provide additional details e.g. code snippets

Hi :) I would like to customize the ElasticSearchStreamingLambda. When I add the following schema, a Lambda is created with Python code (ElasticSearchStreamingLambdaFunction.zip).

type Test @model @searchable {
  id: ID!
}

So I would like to use Node.js or Go and customize the Lambda code.
Is it possible now? I cannot find a better way yet.

Should I edit cloudformation-template.json manually?

Appsync resolver to have access to ENV variable

Is your feature request related to a problem? Please describe.
When creating a BatchPutItem template, I have to provide the table name for the batch operation.
Since I create my table via CloudFormation with an ENV value attached, I cannot use BatchPutItem, because I don't have access to the current environment value in the resolver template.

A workaround I am using right now is to first call a Lambda in a pipeline resolver and pass the environment value from the first function to the second, which does the BatchPutItem.
However, this is somewhat unnecessary and requires an extra Lambda call, while the ENV value should be available at runtime. It looks like it is just not exposed via $ctx.

Describe the solution you'd like
Expose the environment value in the $ctx object.
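
Hypothetically, if the environment name were exposed on the context (say as $ctx.env, which does not exist today), a BatchPutItem template could interpolate the table name directly:

#set( $tableName = "Todo-${ctx.env}" ) ## hypothetical: $ctx.env is not currently available
{
    "version": "2018-05-29",
    "operation": "BatchPutItem",
    "tables": {
        "$tableName": $util.toJson($ctx.args.items)
    }
}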

Geo search

At the moment there is no way to search records via GraphQL/DynamoDB by longitude and latitude (geo), even though AWS has a library for this:
https://github.com/amazon-archives/dynamodb-geo

Can we also make these functionalities available via Amplify?

  • Box Queries: Return all of the items that fall within a pair of geo points that define a rectangle as projected onto a sphere.
  • Radius Queries: Return all of the items that are within a given radius of a geo point.
  • Basic CRUD Operations: Create, retrieve, update, and delete geospatial data items.
  • Easy Integration: Adds functionality to the AWS SDK for Java in your server application.
  • Customizable: Access to raw request and result objects from the AWS SDK for Java.

Custom resolver: should not worry about resolver resource

Is your feature request related to a problem? Please describe.
According to the docs at https://aws-amplify.github.io/docs/cli/graphql#add-a-custom-resolver-that-targets-a-dynamodb-table-from-model, users can add a custom resolver by:

  1. Add a resolver resource to a stack in the stacks/ directory.
  2. Add a resolver template in resolvers/ directory.

Doing the above has the same effect as adding a custom resolver in the AppSync console, but it gains the benefit of version control.

However, I do notice that adding a custom resolver in the AppSync console doesn't ask me to add a resolver resource manually. That's why I propose that adding a custom resolver via resolvers/ should not, either.

Describe the solution you'd like
Write the resolver file in VTL only, without having to worry about implementing the resolver resource file.

Automatic scan operation created by CLI for DynamoDB REST API

Is your feature request related to a problem? Please describe.
I'd like to scan an entire table using an API.get operation when creating a new REST API.

Describe the solution you'd like
Right now the Amplify CLI creates operations against the DynamoDB data source automatically, and the only way to fetch a list of items is to perform a query. I think it would be useful to also support a scan operation out of the box.

Describe alternatives you've considered
Possibly just writing it myself but it would be nice to have it automated, especially useful for demos & tutorials.

@searchable option for encrypted Elasticsearch

The @searchable directive creates an Elasticsearch domain that is unencrypted at rest. There is no way to change the encryption after the domain is created, so can we get an option to encrypt Elasticsearch at rest, similar to the recently added option to enable DynamoDB encryption at rest? Thanks

Use the @connection directive to enable sequential/pipeline creation

Is your feature request related to a problem? Please describe.
[API / GraphQL] AFAIK, it currently requires two GraphQL requests to create an object (e.g. Post) and create connected objects (e.g. Comments, where the schema is post.comments: [Comment!], and the second request uses the id from createPost as commentPostId).

Describe the solution you'd like
It would be great if the @connection directive generated mutations to create connected (many) objects in the same request as the main object. The request might look something like this:

mutation {
  createPost(input: {
    title: "Some Title",
    comments: [
      { text: "Hello" },
      { text: "Second Comment" }
    ]
  }) {
    id
    title
    comments {
      items {
        id
        text
      }
    }
  }
}

The transformer would create the Post and then create each of the comments, injecting Post.id into the commentPostId field of each comment. I believe that this could be implemented without much additional complexity because most of the hard work has already been done by the @connection directive.

Describe alternatives you've considered
Sequential requests. Manually editing the backend to use a Pipeline Transformer.

Support subdirs into API/resolvers

Is your feature request related to a problem? Please describe.
I would like to build complex GraphQL and custom resolvers and store them in subfolders. Right now when I try to push a resolver tree (I use a custom AppSync::Resolver with the right template location) I get

EISDIR: illegal operation on a directory, read  

Describe the solution you'd like
Generate the list of resolvers through fs tree traversal: https://github.com/aws-amplify/amplify-cli/blob/a61237ff51a26fbf93ee423b43a34d89c06acf57/packages/graphql-transformer-core/src/util/amplifyUtils.ts#L234
I can provide a PR if needed.

Adding validation to fields in graphql api schema

Which Category is your question related to?
Amplify GraphQL API

Is it possible to add validation to fields in the schema file, e.g. min/max length or min/max value? If I had a name field on a Person type and wanted to limit it to 250 characters, is that possible using the schema file only? Otherwise, how can this be done?

Allow us to overwrite the generated GraphQL fields, e.g. createPost

Is your feature request related to a problem? Please describe.
Although this isn't a "blocking issue", I'd like to be able to attach the new @function directive to the generated fields.

My current solution is, of course, to make a custom field with a different name, e.g. customCreatePost or createPostLambda.

Describe the solution you'd like
In the non-build schema.graphql I'd like to be able to "redeclare" the generated fields, especially with the @function directive.

Describe alternatives you've considered
I've looked into both the @function directive and custom cloud-formation/custom-resources approaches and neither provide a solution as far as I can tell.

appsync manual configured to automated backend

Which Category is your question related to?
AppSync, enhancement or feature request
What AWS Services are you utilizing?
Amplify, AppSync, Cognito, DynamoDB
Provide additional details e.g. code snippets

I have manually configured AppSync. The reason is that I was on AWS Mobile before Amplify was launched, and I tried switching over: I started from scratch but with the same DynamoDB tables and the schema file available in the AppSync console. I want to be able to update the AppSync schema from the client side using Amplify, but I am unable to do it. I tried downloading the schema, changing it, and pushing, but nothing changes. Is this a bug, or is it a feature request to change from manual to automated AppSync schema updates? Or am I doing this wrong?

RFC - Pipeline Resolver Support

This RFC will document a process to transition the Amplify CLI to use AppSync pipeline resolvers. The driving use case for this feature is to allow users to compose their own logic with the logic that is generated by the GraphQL Transform. For example, a user might want to authorize a mutation that creates a message by first verifying that the user is enrolled in the message's chat room. Other examples include adding custom input validation or audit logging to @model mutations. This document is not necessarily final so please leave your comments so we can address any concerns.

Github Issues

Proposal 1: Use pipelines everywhere

Back in 2018, AppSync released a feature called pipeline resolvers. Pipeline resolvers allow you to serially execute multiple AppSync functions within the resolver for a single field (not to be confused with AWS Lambda functions). AppSync functions behave similarly to old style AppSync resolvers and contain a request mapping template, a response mapping template, and a data source. A function may be referenced by multiple AppSync resolvers allowing you to reuse the same function for multiple resolvers. The AppSync resolver context ($ctx in resolver templates) has also received a new stash map that lives throughout the execution of a pipeline resolver. You may use the $ctx.stash to store intermediate results and pass information between functions.
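
For illustration, a function can pass intermediate results through the stash like so (a generic sketch, not generated code):

## In one function's response mapping template: stash an intermediate result.
$util.qr( $ctx.stash.put("roomId", $ctx.result.id) )

## In a later function's request mapping template: read it back.
#set( $roomId = $ctx.stash.roomId )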

The first step towards supporting pipeline resolvers is to switch all existing generated resolvers to use pipeline resolvers. To help make the generated functions more reusable, each function defines a set of arguments that it expects to find in the stash. The arguments for a function are passed by setting a value in the $ctx.stash.args under a key that matches the name of the function. Below you can read the full list of functions that will be generated by different directives.

Generated Functions

Function: CreateX

Generated by @model and issues a DynamoDB PutItem operation with a condition expression to create records if they do not already exist.

Arguments

The CreateX function expects

{
    "stash": {
        "args": {
            "CreateX": {
                "input": {
                    "title": "some title",
                },
                "condition": {
                    "expression": "attribute_not_exists(#id)",
                    "expressionNames": {
                        "#id": "id"
                    },
                    "expressionValues": {}
                }
            }
        }
    }
}

Function: UpdateX

Generated by @model and issues a DynamoDB UpdateItem operation with a condition expression to update if the item exists.

Arguments

The UpdateX function expects

{
    "stash": {
        "args": {
            "UpdateX": {
                "input": {
                    "title": "some other title",
                },
                "condition": {
                    "expression": "attribute_exists(#id)",
                    "expressionNames": {
                        "#id": "id"
                    },
                    "expressionValues": {}
                }
            }
        }
    }
}

Function: DeleteX

Generated by @model and issues a DynamoDB DeleteItem operation with a condition expression to delete if the item exists.

Arguments

The DeleteX function expects

{
    "stash": {
        "args": {
            "DeleteX": {
                "input": {
                    "id": "123",
                },
                "condition": {
                    "expression": "attribute_exists(#id)",
                    "expressionNames": {
                        "#id": "id"
                    },
                    "expressionValues": {}
                }
            }
        }
    }
}

Function: GetX

Generated by @model and issues a DynamoDB GetItem operation.

Arguments

The GetX function expects

{
    "stash": {
        "args": {
            "GetX": {
                "id": "123"
            }
        }
    }
}

Function: ListX

Generated by @model and issues a DynamoDB Scan operation.

Arguments

The ListX function expects

{
    "stash": {
        "args": {
            "ListX": {
                "filter": {
                    "expression": "",
                    "expressionNames": {},
                    "expressionValues": {}
                },
                "limit": 20,
                "nextToken": "some-next-token"
            }
        }
    }
}

Function: QueryX

Generated by @model and issues a DynamoDB Query operation.

Arguments

The QueryX function expects

{
    "stash": {
        "args": {
            "QueryX": {
                "query": {
                    "expression": "#hashKey = :hashKey",
                    "expressionNames": {
                        "#hashKey": "hashKeyAttribute",
                        "expressionValues": {
                            ":hashKey": {
                                "S": "some-hash-key-value"
                            }
                        }
                    }
                },
                "scanIndexForward": true,
                "filter": {
                    "expression": "",
                    "expressionNames": {},
                    "expressionValues": {}
                },
                "limit": 20,
                "nextToken": "some-next-token",
                "index": "some-index-name"
            }
        }
    }
}

Function: AuthorizeCreateX

Generated by @auth when used on an OBJECT.

Arguments

The AuthorizeCreateX function expects no additional arguments. The AuthorizeCreateX function will look at $ctx.stash.args.CreateX.input and validate it against the $ctx.identity. The function will manipulate $ctx.stash.args.CreateX.condition such that the correct authorization conditions are added.


Function: AuthorizeUpdateX

Generated by @auth when used on an OBJECT.

Arguments

The AuthorizeUpdateX function expects no additional arguments. The AuthorizeUpdateX function will look at $ctx.stash.args.UpdateX.input and validate it against the $ctx.identity. The function will manipulate $ctx.stash.args.UpdateX.condition such that the correct authorization conditions are added.


Function: AuthorizeDeleteX

Generated by @auth when used on an OBJECT.

Arguments

The AuthorizeDeleteX function expects no additional arguments. The AuthorizeDeleteX function will look at $ctx.stash.args.DeleteX.input and validate it against the $ctx.identity. The function will manipulate $ctx.stash.args.DeleteX.condition such that the correct authorization conditions are added.


Function: AuthorizeGetX

Generated by @auth when used on an OBJECT.

Arguments

The AuthorizeGetX function expects no additional arguments. The AuthorizeGetX function will look at the result of the GetX function (via $ctx.prev.result) and validate it against the $ctx.identity. The function will return null and append an error if the user is unauthorized.


Function: AuthorizeXItems

Filters a list of items based on @auth rules placed on the OBJECT. This function can be used by top level queries that return multiple values (list, query) as well as by @connection fields.

Arguments

The AuthorizeXItems function expects $ctx.prev.result to contain a list of "items" that should be filtered. This function returns the filtered results.


Function: HandleVersionedCreate

Created by the @versioned directive and sets the initial value of an object's version to 1.

Arguments

The HandleVersionedCreate function augments the $ctx.stash.args.CreateX.input such that it definitely contains an initial version.
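
A sketch of that augmentation in VTL (illustrative, assuming the stash layout above):

## Default the version to 1 if the client did not supply one.
#if( $util.isNull($ctx.stash.args.CreateX.input.version) )
  $util.qr( $ctx.stash.args.CreateX.input.put("version", 1) )
#end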


Function: HandleVersionedUpdate

Created by the @versioned directive and updates the condition expression with version information.

Arguments

The HandleVersionedUpdate function uses the $ctx.stash.args.UpdateX.input to append a conditional update expression to $ctx.stash.args.UpdateX.condition such that the object is only updated if the versions match.


Function: HandleVersionedDelete

Created by the @versioned directive and updates the condition expression with version information.

Arguments

The HandleVersionedDelete function uses the $ctx.stash.args.DeleteX.input to append a conditional update expression to $ctx.stash.args.DeleteX.condition such that the object is only deleted if the versions match.


Function: SearchX

Created by the @searchable directive and issues an Elasticsearch query against your Elasticsearch domain.

Arguments

The SearchX function expects a single argument "params".

{
    "stash": {
        "args": {
            "SearchX": {
                "params": {
                    "body": {
                        "from": "",
                        "size": 10,
                        "sort": ["_doc"],
                        "query": {
                            "match_all": {}
                        }
                    }
                }
            }
        }
    }
}

Generated Resolvers

The @model, @connection, and @searchable directives all add resolvers to fields within your schema. The @versioned and @auth directives will only add functions to existing resolvers created by the other directives. This section will look at the resolvers generated by the @model, @connection, and @searchable directives.

@model resolvers

type Post @model {
    id: ID!
    title: String
}

This schema will create the following resolvers:


Mutation.createPost

The Mutation.createPost resolver uses its own RequestMappingTemplate to set up the $ctx.stash such that its pipeline is parameterized to return the correct results.

Mutation.createPost.req.vtl

#set($ctx.stash.args = {})
#set($ctx.stash.args.CreatePost = {
    "input": $ctx.args.input
})

Function 1: CreatePost

The function will insert the value provided via $ctx.stash.args.CreatePost.input and return the results.

Mutation.createPost.res.vtl

Return the result of the last function in the pipeline.

$ctx.prev.result

Mutation.updatePost

The Mutation.updatePost resolver uses its own RequestMappingTemplate to set up the $ctx.stash such that its pipeline is parameterized to return the correct results.

Mutation.updatePost.req.vtl

#set($ctx.stash.args = {})
#set($ctx.stash.args.UpdatePost = {
    "input": $ctx.args.input
})

Function 1: UpdatePost

The function will update the value provided via $ctx.stash.args.UpdatePost.input and return the results.

Mutation.updatePost.res.vtl

Return the result of the last function in the pipeline.

$ctx.prev.result

Mutation.deletePost

The Mutation.deletePost resolver uses its own RequestMappingTemplate to set up the $ctx.stash such that its pipeline is parameterized to return the correct results.

Mutation.deletePost.req.vtl

#set($ctx.stash.args = {})
#set($ctx.stash.args.DeletePost = {
    "input": $ctx.args.input
})

Function 1: DeletePost

The function will delete the value designated via $ctx.stash.args.DeletePost.input.id and return the results.

Mutation.deletePost.res.vtl

Return the result of the last function in the pipeline.

$ctx.prev.result

Query.getPost

The Query.getPost resolver uses its own RequestMappingTemplate to set up the $ctx.stash such that its pipeline is parameterized to return the correct results.

Query.getPost.req.vtl

#set($ctx.stash.args = {})
#set($ctx.stash.args.GetPost = {
    "id": $ctx.args.id
})

Function 1: GetPost

The function will get the value designated via $ctx.stash.args.GetPost.id and return the results.

Query.getPost.res.vtl

Return the result of the last function in the pipeline.

$ctx.prev.result

Query.listPosts

The Query.listPosts resolver uses its own RequestMappingTemplate to set up the $ctx.stash such that its pipeline is parameterized to return the correct results.

Query.listPosts.req.vtl

#set($ctx.stash.args = {})
#set($ctx.stash.args.ListPosts = {
    "filter": $util.transform.toDynamoDBFilterExpression($ctx.args.filter),
    "limit": $ctx.args.limit,
    "nextToken": $ctx.args.nextToken
})

Function 1: ListPosts

The function will list values using the arguments designated via $ctx.stash.args.ListPosts and return the results.

Query.listPosts.res.vtl

Return the result of the last function in the pipeline.

$ctx.prev.result

@connection resolvers

type Post @model {
    id: ID!
    title: String
    comments: [Comment] @connection(name: "PostComments")
}
type Comment @model {
    id: ID!
    content: String
    post: Post @connection(name: "PostComments")
}

The example above would create the following resolvers


Post.comments

The Post.comments resolver uses its own RequestMappingTemplate to set up the $ctx.stash such that its pipeline is parameterized to return the correct results.

Post.comments.req.vtl

#set($ctx.stash.args = {})
#set($ctx.stash.args.QueryComments = {
    "query": {
        "expression": "#connectionAttribute = :connectionAttribute",
        "expressionNames": {
            "#connectionAttribute": "commentPostId"
        },
        "expressionValues": {
            ":connectionAttribute": {
                "S": "$ctx.source.id"
            }
        }
    },
    "scanIndexForward": true,
    "filter": $util.transform.toDynamoDBFilterExpression($ctx.args.filter),
    "limit": $ctx.args.limit,
    "nextToken": $ctx.args.nextToken,
    "index": "gsi-PostComments"
})

Function 1: QueryComments

The function will get the values designated via $ctx.stash.args.QueryComments and return the results.

Post.comments.res.vtl

Return the result of the last function in the pipeline.

$ctx.prev.result

Comment.post

The Comment.post resolver uses its own RequestMappingTemplate to set up the $ctx.stash such that its pipeline is parameterized to return the correct results.

Comment.post.req.vtl

#set($ctx.stash.args = {})
#set($ctx.stash.args.GetPost = {
    "id": "$ctx.source.commentPostId"
})

Function 1: GetPost

The function will get the values designated via $ctx.stash.args.GetPost and return the results.

Comment.post.res.vtl

Return the result of the last function in the pipeline.

$ctx.prev.result

@searchable resolvers

type Post @model @searchable {
    id: ID!
    title: String
}

Query.searchPosts

The Query.searchPosts resolver uses its own RequestMappingTemplate to set up the $ctx.stash such that its pipeline is parameterized to return the correct results.

Query.searchPosts.req.vtl

#set($ctx.stash.args = {})
#set($ctx.stash.args.SearchPosts = {
    "query": $util.transform.toElasticsearchQueryDSL($ctx.args.filter),
    "sort": [],
    "size": $context.args.limit,
    "from": "$context.args.nextToken"
})

Function 1: SearchPosts

The function will get the values designated via $ctx.stash.args.SearchPosts and return the results.

Query.searchPosts.res.vtl

Return the result of the last function in the pipeline.

$ctx.prev.result

@auth resolvers

The @auth directive does not add its own resolvers but will augment the behavior of existing resolvers by manipulating values in the $ctx.stash.
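
For example, an ownership rule could be injected ahead of the CreateX function along these lines (a hand-written sketch of the stash manipulation, not generated output):

## Append an ownership condition before the CreateX function runs.
#set( $cond = $ctx.stash.args.CreateX.condition )
#set( $cond.expression = "$cond.expression AND #owner = :owner" )
$util.qr( $cond.expressionNames.put("#owner", "owner") )
$util.qr( $cond.expressionValues.put(":owner", $util.dynamodb.toDynamoDB($ctx.identity.username)) )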

  • Mutation.createX - @auth will add logic to the request mapping template of the resolver that injects a condition into $ctx.stash.CreateX.condition
  • Mutation.updateX - @auth will add logic to the request mapping template of the resolver that injects a condition into $ctx.stash.UpdateX.condition
  • Mutation.deleteX - @auth will add logic to the request mapping template of the resolver that injects a condition into $ctx.stash.DeleteX.condition
  • Query.getX - @auth will add logic to the response mapping template of the resolver that will return the value if authorized.
  • Query.listX - @auth will add logic to the response mapping template of the resolver that will filter $ctx.prev.result.items based on the auth rules.
  • Query.searchX - @auth will add logic to the response mapping template of the resolver that will filter $ctx.prev.result.items based on the auth rules.
  • Query.queryX - @auth will add logic to the response mapping template of the resolver that will filter $ctx.prev.result.items based on the auth rules.
  • Model.connectionField - @auth will add logic to the response mapping template of the resolver that will filter $ctx.prev.result.items based on the auth rules.
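
For illustration, here is a minimal sketch of the owner condition such logic might inject for Mutation.updatePost, following the $ctx.stash.args layout used by the templates above (illustrative only, not the transformer's exact output):

#set($identity = $ctx.identity.claims.get("cognito:username"))
## Only allow the update to proceed when the stored "owner" attribute matches the caller.
#set($ctx.stash.args.UpdatePost.condition = {
    "expression": "#owner = :identity",
    "expressionNames": { "#owner": "owner" },
    "expressionValues": { ":identity": { "S": "$identity" } }
})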

@versioned resolvers

The @versioned directive does not add its own resolver but will augment the behavior of existing resolvers by manipulating values in the $ctx.stash.

  • Mutation.createX - @versioned will add logic to the request mapping template of the resolver that injects a condition into $ctx.stash.CreateX.condition
  • Mutation.updateX - @versioned will add logic to the request mapping template of the resolver that injects a condition into $ctx.stash.UpdateX.condition
  • Mutation.deleteX - @versioned will add logic to the request mapping template of the resolver that injects a condition into $ctx.stash.DeleteX.condition

Proposal 2: The @before and @after directives

There are many possibilities for how to expose pipeline functions via the transform. Defining a function of your own requires a request mapping template, a response mapping template, and a data source. Using a function requires that you place it, in order, within a pipeline resolver. Any directive(s) introduced would need to accommodate both of these requirements. Here are a few options for discussion.

Before & After directives for adding logic to auto-generated model mutations

The main use case for this approach is to add custom authorization/audit/etc. logic to mutations that are generated by the Amplify CLI. For example, you might want to verify that a user is a member of a chat room before they can create a message. Currently this design only supports mutations, but if you have suggestions for how to generalize this for read operations, comment below.

directive @before(mutation: ModelMutation!, function: String!, datasource: String!) on OBJECT
directive @after(mutation: ModelMutation!, function: String!, datasource: String!) on OBJECT
enum ModelMutation {
    create
    update
    delete
}

Which would be used like so:

# Messages are only readable via @connection fields.
# Message mutations are pre-checked by a custom function.
type Message 
  @model(queries: null)
  @before(mutation: create, function: "AuthorizeUserIsChatMember", datasource: "ChatRoomTable")
{
    id: ID!
    content: String
    room: Room @connection(name: "ChatMessages")
}
type ChatRoom @model @auth(rules: [{ allow: owner, ownerField: "members" }]) {
    id: ID!
    messages: [Message] @connection(name: "ChatMessages")
    members: [String]
}

To implement your function logic, you would drop two files in resolvers/ called AuthorizeUserIsChatMember.req.vtl & AuthorizeUserIsChatMember.res.vtl:

## AuthorizeUserIsChatMember.req.vtl
{
    "version": "2017-02-28",
    "operation": "GetItem",
    "key": {
        "id": { "S": "$ctx.args.input.messageRoomId" }
    }
}

## AuthorizeUserIsChatMember.res.vtl
#if( ! $ctx.result.members.contains($ctx.identity.username) )
  ## If the user is not a member, do not allow the CreateMessage function to be called next.
  $util.unauthorized()
#else
  ## Do nothing and allow the CreateMessage function to be called next.
  $ctx.result
#end

The @before directive specifies which data source should be called and the order of the functions could be determined by the order of the @before directives on the model. The @after directive would work similarly except the function would run after the generated mutation logic.

Audit mutations with a single AppSync function

type Message
  @model(queries: null)
  @after(mutation: create, function: "AuditMutation", datasource: "AuditTable")
{
    id: ID!
    content: String
}
# The Audit model is not exposed via the API but will create a table 
# that can be used by your functions.
type Audit @model(queries: null, mutations: null, subscriptions: null) {
    id: ID!
    ctx: AWSJSON
}

You could then use function templates like this:

## AuditMutation.req.vtl
## Log the entire resolver ctx to a DynamoDB table
#set($auditRecord = {
    "ctx": $ctx,
    "timestamp": $util.time.nowISO8601()
})
{
    "version": "2017-02-28",
    "operation": "PutItem",
    "key": {
        "id": { "S": "$util.autoId()" }
    },
    "attributeValues": $util.dynamodb.toMapValuesJson($auditRecord)
}

## AuditMutation.res.vtl
## Return the same value as the previous function
$util.toJson($ctx.prev.result)

Request for comments

The goal is to provide simple to use and effective abstractions. Please leave your comments with questions, concerns, and use cases that you would like to see covered.

MultiEnv and Dynamically get DynamoDB TableName in a BatchGetItem request mapping template

Is your feature request related to a problem? Please describe.
Batch operations in custom resolvers don't play well with the multi-env feature.

Describe the solution you'd like
When using a BatchGetItem operation in a custom resolver I need to hardcode the table name, like this:

{
  "version" : "2018-05-29",
  "operation" : "BatchGetItem",
  "tables" : {
    "MYTAblename": {
      "keys": $util.toJson($ids),
      "consistentRead": true
    }
  }
}

However, this creates a problem when I'm using a multi-env setup.
I haven't found any way, in the docs or elsewhere, to get the table name from inside a resolver mapping template.

Any ideas on how to work around this?
Thanks.


Ability to rename model/field without losing data

Is your feature request related to a problem? Please describe.
In almost every project, you eventually want to rename a model or model field because you later have a deeper understanding of the domain and how it should be modeled.

Describe the solution you'd like
I think graph.cool's temporary @rename directive works very well.

## Renaming the `Post` type to `Story`, and its `text` field to `content`
type Story @model @rename(oldName: "Post") {
  content: String @rename(oldName: "text")
}

After deploying the change, you have to manually remove the @rename directive.

Describe alternatives you've considered
None

Error in GQL Compile re Many-to-Many Detection

Describe the bug
When setting up a reflexive belongs-to connection, the order of the fields can trick the compiler into thinking that it's a many-to-many connection.

To Reproduce
If you create a model like so:

type Task @model @searchable @versioned {
  id: ID!
  blockedBy: Task @connection(name: "BlockedBy")
  blocks: [Task] @connection(name: "BlockedBy")
}

the compiler will work fine.

However, if you change the order of blocks and blockedBy the compiler will think that it's a many-to-many and throw an error.

This model, for example, will not compile:

type Task @model @searchable @versioned {
  id: ID!
  blocks: [Task] @connection(name: "BlockedBy")
  blockedBy: Task @connection(name: "BlockedBy")
}

Expected behavior
Distinguish between many-to-many connections and belongs-to connections properly regardless of field order.

Desktop (please complete the following information):

  • OS: MacOS 10.14.4
  • Amplify Version 1.1.4


[GraphQL Transform] Filtering with multiple many-to-many relationships

** Which Category is your question related to? **
GraphQL Transform

** What AWS Services are you utilizing? **
Appsync, Amplify

** Provide additional details e.g. code snippets **
If I have a schema like this

type Post @model {
  id: ID!
  title: String!
  editors: [PostEditor] @connection(name: "PostEditors")
  categories: [PostCategory] @connection(name: "PostCategories")
}

type PostEditor {
  id: ID!
  post: Post! @connection(name: "PostEditors")
  editor: User! @connection(name: "UserEditors")
}
type User @model {
  id: ID!
  username: String!
  posts: [PostEditor] @connection(name: "UserEditors")
}

type PostCategory {
  id: ID!
  post: Post! @connection(name: "PostCategories")
  category: Category! @connection(name: "CategoryType")
}
type Category @model {
  id: ID!
  categoryName: String!
  posts: [PostCategory] @connection(name: "CategoryType")
}

This is how I can get the posts' titles that are posted by the user "ibrahim"

query {
  listUsers(filter: { username: { eq: "ibrahim" } }) {
    items {
      posts {
        items {
          post {
            title
          }
        }
      }
    }
  }
}

I can do a similar thing to get the posts which have a specific categoryName. My question is: how can I get the posts that are posted by, say, "ibrahim" and also have the categoryName "art"?

I have spent a lot of time trying to solve this and would appreciate your help.

Support different authentication types for a REST API

So, I have used amplify api add and now have a GraphQL and a REST API in my project, but the auth for my REST API is using Cognito. How can I change that? I would like to use API key authentication for my REST API and keep using Cognito for the rest. I cannot find any reference in the docs, and by running amplify auth add again I get the message:

Auth has already been added to this project. To update run amplify update auth.

If this is currently not supported through an amplify-cli command or by editing a CloudFormation template, then it can be a candidate for a feature request.

Is there a way to disable only the ElasticSearchStreamingLambda?


Is there a way to disable only the ElasticSearchStreamingLambda? I would like to create and manage the Elasticsearch service with @searchable, but disable only the streaming Lambda, because I would like to POST to Elasticsearch myself.

Is there a simple way to get a list of items by a set of ids?

** Which Category is your question related to? **
API

** What AWS Services are you utilizing? **
AppSync, Amplify

** Provide additional details e.g. code snippets **
Is there a simple way to write a resolver to get a list of items from a table by passing in a set of ids?

I'm currently solving this problem by building a filter expression string somewhat manually (this is a function for a pipeline resolver):

#set( $expValues = {} )
#set( $expression = "#id IN (" )
#set( $ids = $ctx.stash.get("ids"))

#set( $index = 0 )

#foreach( $id in $ids )
    #set( $index = $index + 1 )
    #if( $ids.size() == $index )
        #set( $expression = "${expression} :id${index})" )
    #else
        #set( $expression = "${expression} :id${index}, " )
    #end
    $util.qr( $expValues.put(":id${index}", { "S" : "${id}" }) )
#end

{
  "operation" : "Scan",
  "filter" : {
      "expression" : "${expression}",
      "expressionNames": {
        "#id" : "id"
      },
      "expressionValues" : $util.toJson($expValues)
  }
}

Is this the best way? For example, I noticed there was a utility, $util.transform.toDynamoDBFilterExpression, and tried implementing it as suggested in @mikeparisstuff's answer on StackOverflow: https://stackoverflow.com/questions/52046495/util-transform-todynamodbfilterexpression

But I kept getting errors in the response about $util and expecting 'null', 'true', or 'false', so I had to resort to the approach above.

It would be nice to have a better way to build complex queries, as DynamoDB JSON is a bit painful to write. I was hoping to be able to use filters similar to the OR: [{ id: "123" }, { id: "124" }]-style constructs that AppSync/Amplify already provides at the API level, but at the resolver level.
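
If the set of ids is known, the BatchGetItem operation from the AppSync DynamoDB batch tutorial avoids the Scan entirely. A minimal sketch, assuming the ids live in $ctx.stash and the table is literally named PostTable (table names must currently be hardcoded, per the multi-env issue above):

#set($keys = [])
#foreach($id in $ctx.stash.get("ids"))
    ## Each key must be a DynamoDB-typed map, e.g. { "id": { "S": "..." } }
    $util.qr($keys.add($util.dynamodb.toMapValues({ "id": $id })))
#end
{
    "version": "2018-05-29",
    "operation": "BatchGetItem",
    "tables": {
        "PostTable": {
            "keys": $util.toJson($keys),
            "consistentRead": true
        }
    }
}

The matched items come back under $ctx.result.data keyed by table name, and a single BatchGetItem request can fetch at most 100 keys.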

How to add a RequestTemplate to enrich REST API requests to Lambda with Cognito User Pool user details (like username and user ID)

** Which Category is your question related to? **

API, Auth

** What AWS Services are you utilizing? **

Lambda, API Gateway, Cognito User Pools

** Provide additional details e.g. code snippets **

I'd like to have access to information about the Cognito user when they issue a call to an authenticated Serverless Express REST API created via amplify add api. I've seen that request templates can be used to inject user claims into the request, but I'm having difficulty finding which part of my myapi-cloudformation-template.json I should add this to.

Ultimately I'd like to be able to write code similar to:

app.post('/posts', async (req, res, next) => {
  try {
    const cognito = new aws.CognitoIdentityServiceProvider()
    const email = req.apiGateway.event.requestContext.user.email
    const myCustomAttribute = req.apiGateway.event.requestContext.user['custom:myCustomAttribute']

    //TODO use the above values for stuff
  } catch (err) {
    next(err)
  }
})
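
With a Cognito User Pool authorizer, API Gateway exposes the verified token claims at $context.authorizer.claims inside mapping templates, so a (non-proxy) body-mapping fragment along these lines could forward them to Lambda; a sketch, not the exact template Amplify generates:

{
    "body": $input.json('$'),
    "user": {
        "email": "$context.authorizer.claims.email",
        "username": "$context.authorizer.claims['cognito:username']",
        "myCustomAttribute": "$context.authorizer.claims['custom:myCustomAttribute']"
    }
}

Note that if the API uses Lambda proxy integration (which the Serverless Express setup from amplify add api does), mapping templates are bypassed; in that case, with a Cognito User Pool authorizer attached, the claims arrive directly on event.requestContext.authorizer.claims in the Lambda event.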

Make export name of the stack's output reachable

Is your feature request related to a problem? Please describe.
Currently, there is no easy way to get an output from a stack. The export name is generated using the current stack's name, so we end up with a long name that changes for each environment, which makes importing a stack's output from elsewhere practically impossible.

For example:
The generated API CloudFormation template has this output defined

"Outputs": {
    "GraphQLAPIIdOutput": {
        "Description": "Your GraphQL API ID.",
        "Value": {
            "Fn::GetAtt": [
                "GraphQLAPI",
                "ApiId"
            ]
        },
        "Export": {
            "Name": {
                "Fn::Join": [
                    ":",
                    [
                        {
                            "Ref": "AWS::StackName"
                        },
                        "GraphQLApiId"
                    ]
                ]
            }
        }
    }
}

The resulting export name is something like ParentStackName-ResourceName-AWSUniqueID. We cannot hardcode that name in another stack (for example, a function), because it will fail when deploying a new environment if the old stack is removed, or we end up pointing to a service in another stack.

Describe the solution you'd like
From the parent stack (amplify/backend/awscloudformation/nested-cloudformation-stack.yml), pass the AWS::StackName as a parameter to all the child stacks. For example:

"Parameters": {
    "ParentStackName": { "Fn::Sub": "${AWS::StackName}" }
}

And from each child to its own children, and so on:

"Parameters": {
    "ParentStackName": { "Fn::Sub": "${ParentStackName}" }
}

This makes the value available when exporting an output:

"Outputs": {
    "GraphQLAPIIdOutput": {
        "Description": "Your GraphQL API ID.",
        "Value": {
            "Fn::GetAtt": [
                "GraphQLAPI",
                "ApiId"
            ]
        },
        "Export": {
            "Name": {
                "Fn::Join": [
                    ":",
                    [
                        {
                            "Fn::Sub": "${ParentStackName}"
                        },
                        "GraphQLApiId"
                    ]
                ]
            }
        }
    }
}

This will make all the exports available in any CloudFormation template in the app.

"Fn::ImportValue": {
    "Fn::Join": [
        ":",
        [
            {
                "Fn::Sub": "${ParentStackName}"
            },
            "GraphQLApiId"
        ]
    ]
}

Additional context
For this to fully work, there must be a mechanism to define a DependsOn and control the order in which stacks are created, but that can be another issue.

True multi-tenant support and modelling with @auth

I have tried modelling my application with Amplify but, as far as I can see, the auth config using Cognito does not currently support true multi-tenancy. By true multi-tenancy I mean that users belong to one or more organisations with a particular session being bound to a single organisation at a time. Data associated with one organisation should never be visible to another organisation. Like users, organisations need to be added or removed without infrastructure changes.

Custom resolver generator from the CLI

Currently, making a custom resolver requires some copy-and-paste work.

So I think it may be a good idea to generate custom resolver code from the CLI.

@auth Combining Owner/Groups rules for Multi-Tenant Apps

Is your feature request related to a problem? Please describe.
Ability to support multi-tenancy through AppSync where individual items are owned by / belong to a tenant instead of a user, while we still have the ability to permission queries and mutations. Generated resolvers today effectively use isOwner || isInGroup(x for x in cognitoGroups) logic, so multiple @auth rules cannot be combined to create more granular permissions.

Describe the solution you'd like
A few ideas:

  • Provide the ability to declare the combination logic before transformation so we could generate isOwner && isInGroup(x for x in cognitoGroups) when we have both rules types declared
  • Create a new @auth tenant strategy which uses the existing ownership transformation code behind the scenes but automatically changes the combination logic to isTenant && (isOwner || isInGroup(x for x in cognitoGroups)) (sketched below)
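
A hypothetical sketch of the second idea (the tenant strategy and tenantField argument do not exist in the transformer today; Invoice and tid are placeholder names):

# Hypothetical syntax for the proposed "tenant" strategy.
type Invoice @model
  @auth(rules: [
    { allow: tenant, tenantField: "tid", identityField: "claims.tid" },
    { allow: groups, groups: ["Billing"] }
  ]) {
  id: ID!
  tid: String!
  amount: Float
}
# Desired evaluation per this proposal: isTenant && (isOwner || isInGroup)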

Describe alternatives you've considered
Currently using the existing @auth owner strategy with custom ownerField and identityField values, and setting the tid claim on the token with a pre-token generation Lambda function:

@auth(rules: [{allow: owner, ownerField: "tid", identityField: "claims.tid"}])

When used as the only @auth strategy, it works as intended (e.g. inserting the correct tid value during mutations; filters by tid value during queries, etc.).

But when I combine with @auth static groups strategy for permissions, the authorisation checks use OR logic instead of AND logic. I can't check for instance that a record both belongs to Tenant A (which the user belongs to) and has Permission X.

How to do bi-directional one to one relationship?

** Which Category is your question related to? **
graphql-transformer

** What AWS Services are you utilizing? **
AppSync / Dynamo

** Provide additional details e.g. code snippets **

I'm currently running into a situation where initially I had a model like follows:

type Post @model {
    id: ID!
    title: String!
}

type Response @model {
    id: ID!
    content: String!
    post: Post @connection
}

This correctly stores a 'responsePostId' for each row in the Response table.

And then I decided I wanted access to 'response' from the Post model so I modified the schema to the following:

type Post @model {
    id: ID!
    title: String!
    response: Response @connection(name: "PostResponse")
}

type Response @model {
    id: ID!
    content: String!
    post: Post @connection(name: "PostResponse")
}

What this does is create a "postResponseId" attribute for every new item in the Posts table and use $context.source.postResponseId to get Post.response. However, I don't need that attribute, because to get the response for a post (through the Post.response resolver) I should just be able to use $context.source.id as the value for "responsePostId" when querying the Response data source.

Note if I change the Post schema to a one to many relationship:

type Post @model {
    id: ID!
    title: String!
    responses: [Response] @connection(name: "PostResponse")
}

The Post.responses resolver correctly queries using $context.source.id rather than a $context.source.postResponseId.

I read https://aws-amplify.github.io/docs/cli/graphql?sdk=js and it talks about how to do a bidirectional one-to-many relationship and a simple one-to-one relationship, but there is no example there for a bidirectional one-to-one. Am I missing something here? Maybe one-to-one bidirectional relationships are not supported because there is no way to enforce strict 1:1-ness in the database?
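
As a workaround, a custom Post.response request template could do exactly what is described above: query the Response table by $ctx.source.id. A sketch, assuming the gsi-PostResponse index that @connection(name: "PostResponse") creates on the Response table:

{
    "version": "2017-02-28",
    "operation": "Query",
    "query": {
        "expression": "#connectionAttribute = :connectionAttribute",
        "expressionNames": { "#connectionAttribute": "responsePostId" },
        "expressionValues": {
            ":connectionAttribute": { "S": "$ctx.source.id" }
        }
    },
    "index": "gsi-PostResponse",
    "limit": 1
}

The matching response template could then return $util.toJson($ctx.result.items[0]) to collapse the single-item result list into one object.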

Amplify API - Add support for DynamoDB batch operations.

Amplify API should add support for batch operations, i.e. generate the appropriate resolvers and schema.graphql for DynamoDB (and possibly Aurora Serverless in the future).

The support could follow the AWS own AppSync tutorial:
https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-dynamodb-batch.html

Once supported, the applications can use those batch operations to replace suboptimal multiple single item operations.

The current GraphQL subscriptions should not be broken (i.e. when multiple items are created/updated/deleted via a batch operation, the application should still get the appropriate notifications about those items via subscription).

What could be discussed, ideally between the AWS Amplify API and AWS AppSync teams, is whether some new "batch" subscriptions can be added to the API (so that an application can get a notification about multiple created/updated/deleted items via one subscription message).

@searchable indexing parent-child possibility?

** Which Category is your question related to? **
https://www.elastic.co/guide/en/elasticsearch/guide/current/indexing-parent-child.html
I guess there is a stream from DynamoDB to Elasticsearch when a new doc is created.
This "stream" is fixed, right? I used the AppSync @searchable from Amplify.

So I guess I need to add the parent ID, as in the example on that page:
PUT /company/employee/1?parent=london
where london is the ID.

So this is not possible with this auto-configured stream?
How can I resolve this, any ideas?

** What AWS Services are you utilizing? **
AppSync / DynamoDB / ElasticSearch

** Provide additional details e.g. code snippets **

type Job @model @searchable {
  id: ID!
  name: String
  location: Location
  interacts: [Jobinteract] @connection(name: "interactjob")
}

type Jobinteract @model @searchable {
  id: ID!
  user: User! @connection(name: "interactsuser")
  job: Job! @connection(name: "interactjob")
}

So I want all Jobs that the user has not seen yet. I guess with Elasticsearch I can do this via parent-child indexing.

RFC - @auth directive improvements

This document will outline designs for a number of new features relating to authentication & authorization within the GraphQL Transform. The goal of these features is to fill holes and introduce new mechanisms that make protecting your valuable information easier.

Proposal 1: Replace 'queries' and 'mutations' arguments with 'operations'

Merged by aws-amplify/amplify-cli#1262

Currently an @auth directive like this:

type Task @model @auth(rules: [{allow: owner}]) {
    id: ID!
    title: String
    owner: String
}

causes these changes to the following resolvers:

  1. Query.getTask - Returns the post only if the logged in user is the post's owner.
  2. Query.listTasks - Filter items such that only owned posts are returned.
  3. Mutation.createTask - If an owner is provided via $ctx.args.input.owner and matches the identity of the logged in user, succeed. If no owner is provided, set logged in user as the owner, else fail.
  4. Mutation.updateTask - Append a conditional expression that will only update the record if the logged in user is its owner.
  5. Mutation.deleteTask - Append a conditional expression that will only delete the record if the logged in user is its owner.

In other words, the @auth directive currently protects the root level query & mutation fields that are generated for an @model type.

Problem: The 'queries' and 'mutations' arguments imply top level protection

GraphQL APIs are a graph and we need to be able to define access rules on any field, not just the top level fields.

Solution

I suggest replacing the queries and mutations arguments on the @auth directive with a single operations argument. This would be the new @auth directive definition.

directive @auth(rules: [AuthRule!]!) on OBJECT
input AuthRule {
    allow: AuthStrategy!
    ownerField: String # defaults to "owner"
    identityField: String # defaults to "cognito:username"
    groupsField: String
    groups: [String]

    # The new argument
    operations: [ModelOperation]

    # Old arguments
    queries: [ModelQuery] @deprecated(reason: "The 'queries' argument will be deprecated in the future. Please replace this argument with the 'operations' argument.")
    mutations: [ModelMutation] @deprecated(reason: "The 'mutations' argument will be deprecated in the future. Please replace this argument with the 'operations' argument.")
}
enum AuthStrategy { owner groups }

# The new enum
enum ModelOperation { create update delete read }

# The old enums
enum ModelQuery { get list }
enum ModelMutation { create update delete }

This change generalizes the config such that it implies all read operations on that model will be protected, not just the top level 'get' & 'list' queries. Auth rules that use the 'read' operation will be applied to top level query fields, @connection resolvers, top level fields that query custom indexes, and subscription fields. Auth rules that use the 'create', 'update', and 'delete' operations will be applied to createX, updateX, and deleteX mutations respectively. Rules using queries & mutations will keep the same behavior, and rules using operations will get the new behavior. The queries & mutations arguments will eventually be removed in a future major release.
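
Concretely, the Task example from earlier rewritten against the new argument would look like:

type Task @model @auth(rules: [{ allow: owner, operations: [create, update, delete, read] }]) {
    id: ID!
    title: String
    owner: String
}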

Protect @connections by default

Merged by aws-amplify/amplify-cli#1262

Once the change from queries/mutations -> operations has been implemented, we will want to go back and implement any missing authorization logic in @connection fields by default.

For example, given this schema:

type Post @model @auth(rules: [{allow: owner, operations: [create, update, delete, read]}]) {
    id: ID!
    title: String
    owner: String
}
type Blog @model {
    id: ID!
    title: String
    # This connection references type Post which has auth rules and thus should be authorized.
    posts: [Post] @connection
}

The new code would add authorization logic to the Blog.posts resolver such that only owners of a post would be able to see the posts for a given blog. It is important to note that the new logic will restrict access such that you cannot see records that you are not supposed to see, but it will not change any index structures under the hood. You will be able to use @connection with the new custom index features to optimize the access pattern and then use @auth to protect access within that table or index.

Proposal 2: Implement @auth on @searchable search fields

Github Issues

Problem

Currently Query.searchX resolvers generated by @searchable are not protected by @auth rules.

Solution

The Elasticsearch DSL is very powerful and will allow us to inject Elasticsearch query terms and implement authorization checks within Elasticsearch. This work will need to handle static & dynamic ownership and group-based authorization rules. Any auth rule that includes the 'read' operation will protect the Query.searchX field.
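
For instance, a static ownership rule might wrap the caller's search in a bool query whose first clause is the injected auth check. An illustrative sketch of the injected DSL, where "cognito-user-id" stands in for the caller's identity and the match clause stands in for the caller's own filter:

{
    "bool": {
        "must": [
            { "match_phrase": { "owner": "cognito-user-id" } },
            { "match": { "title": "graphql" } }
        ]
    }
}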

Proposal 3: Make @auth protect subscription fields

Problem: @auth does not protect subscription fields.

type Post @model @auth(rules: [{allow: owner}]) {
    id: ID!
    title: String
    owner: String
}

Currently subscriptions are not protected automatically.

Solution

AppSync subscription queries are authorized at connect time. That means that we need to parameterize the subscription queries in such a way that any relevant authorization logic is included in the subscription query itself. In the case of ownership @auth, this means that the client must pass an owner as a query argument and the subscription resolver should verify that the logged in user and owner are the same.

For example, given this schema:

type Post @model @auth(rules: [{allow: owner}]) {
    id: ID!
    title: String
    owner: String
}

The following subscription fields would be output:

type Subscription {
    onCreatePost(owner: String): Post
    onUpdatePost(owner: String): Post
    onDeletePost(owner: String): Post
}

and when running a subscription query, the client must provide a value for the owner:

subscription OnUpdatePost($owner: String) {
    onUpdatePost(owner: $owner) {
        id
        title
    }
}

The proposed change would create a new subscription resolver for each subscription field generated by the @model. Each subscription resolver would verify the provided owner matches the logged-in identity and would fail the subscription otherwise.
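
A sketch of what such a subscription resolver could look like, attached to a None data source (illustrative; the final templates may differ):

## onUpdatePost.req.vtl (sketch)
#if( $ctx.args.owner != $ctx.identity.claims.get("cognito:username") )
    ## The caller asked to subscribe to someone else's items: reject at connect time.
    $util.unauthorized()
#end
{
    "version": "2017-02-28",
    "payload": {}
}

## onUpdatePost.res.vtl (sketch)
$util.toJson(null)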

There are a few limitations to this approach:

  1. There is a limit of 5 arguments per subscription field.
    • e.g. a field onCreatePost(owner: String, groups: String, otherOther: String, anotherOwner: String, anotherListOfGroups: String): Post has too many arguments and is invalid. To handle this, the CLI can emit a warning prompting you to customize your subscription field in the schema itself.
  2. Subscription fields are equality checked against published objects. This means that subscribing to objects with multi-owner or multi-group auth might behave slightly differently than expected.
    • When you subscribe you will need to pass the full list of owners/groups on the item, not just the calling identity.

As an example to point (2) above, imagine this auth rule:

type Post @model @auth(rules: [{allow: owner, ownerField: "members"}]) {
    id: ID!
    title: String
    members: [String]
}

Let's say that we want to subscribe to all new posts where I am a member.

subscription {
    onCreatePost(members: ["my-user-id"]) {
        id
        title
        members
    }
}

AppSync messages are published to subscriptions when the result of the mutation, to which the subscription field is subscribed, contains fields that equal the values provided by the subscription arguments. That means that if I were to publish a message via a mutation,

mutation {
    createPost(input: { title: "New Article", members: ["my-user-id", "my-friends-user-id"]}) {
        id
        title
        members
    }
}

the subscription started before would not be triggered because ["my-user-id", "my-friends-user-id"] is not the same as ["my-user-id"]. I bring this up for clarity but I still think this feature is useful. Single owner & group based authorization will behave as expected.

Proposal 4: Field level @auth

Merged by aws-amplify/amplify-cli#1262

Currently an @auth directive like this:

type Task @model @auth(rules: [{allow: owner}], queries: [get, list], mutations: [create, update, delete]) {
    id: ID!
    title: String
    owner: String
}

causes these changes to the following resolvers:

  1. Query.getTask - Returns the post only if the logged in user is the post's owner.
  2. Query.listTasks - Filter items such that only owned posts are returned.
  3. Mutation.createTask - If an owner is provided via $ctx.args.input.owner and matches the identity of the logged in user, succeed. If no owner is provided, set logged in user as the owner, else fail.
  4. Mutation.updateTask - Append a conditional expression that will only update the record if the logged in user is its owner.
  5. Mutation.deleteTask - Append a conditional expression that will only delete the record if the logged in user is its owner.

In other words, the @auth directive currently protects the root level query & mutation fields.

Github Issues

Problem: You cannot protect @connection resolvers

For example, look at this schema.

type Task @model {
    id: ID!
    title: String
    owner: String
    notes: [Notes] @connection(name: "TaskNotes")
}
# We are trying to specify that notes should only be visible by the owner but
# we are unintentionally opening access via *Task.notes*.
type Notes @model @auth(rules: [{allow: owner}]) {
    id: ID!
    title: String
    task: Task @connection(name: "TaskNotes")
    owner: String
}

Since only top level fields are protected and we do not have an @auth directive on the Task model, we are unintentionally opening access to notes via Task.notes.

Solution

We discussed having @auth rules on OBJECTs automatically protect connection fields in proposal 1, but I also suggest opening the @auth directive such that it can be placed on both FIELD_DEFINITION and OBJECT nodes. This will result in an updated definition for @auth:

directive @auth(rules: [AuthRule!]!) on OBJECT, FIELD_DEFINITION
# ...

You may then use the @auth directive on individual fields in addition to the object type definition. An @auth directive used on an @model OBJECT will augment top level queries & mutations while an @auth directive used on a FIELD_DEFINITION will protect that field's resolver by comparing the identity to the source object designated via $ctx.source.

For example, you might have:

type User @model {
    id: ID!
    username: String
    
    # Can be used to protect @connection fields.
    # This resolver will compare the $ctx.identity to the "username" attribute on the User object (via $ctx.source in the User.posts resolver).
    # In other words, we are authorizing access to posts based on information in the user object.
    posts: [Post] @connection(name: "UserPosts") @auth(rules: [{ allow: owner, ownerField: "username" }])

    # Can also be used to protect other fields
    ssn: String @auth(rules: [{ allow: owner, ownerField: "username" }])
}
# Users may create, update, delete, get, & list at the top level if they are the
# owner of this post itself.
type Post @model @auth(rules: [{ allow: owner }]) {
    id: ID!
    title: String
    author: User @connection(name: "UserPosts")
    owner: String
}

An important thing to notice is that the @auth directive compares the logged-in identity to the object exposed by $ctx.source in the resolver of that field. A side effect of this is that an @auth directive on a field of the top level query type doesn't have much meaning, since $ctx.source will be an empty object. This is ok since @auth rules on OBJECT types handle protecting top level query/mutation fields.

Also note that the queries and mutations arguments are invalid on a field level @auth directive, but the operations argument is allowed. The transform will validate this and fail at compile time with an error message pointing you to the mistake. E.g. this is invalid:

type User @model {
    # @auth on FIELD_DEFINITION is always protecting a field that reads data.
    # Fails with error "@auth directive used on field User.ssn cannot specify arguments 'mutations' and 'queries'"
    ssn: String @auth(rules: [{allow: owner, mutations: [create], queries: [get]}])

    # No one but the owner may update/delete/read their own email.
    email: String @auth(rules: [{allow: owner, operations: [update, delete, read]}])
}

The implementation for allowing operations in field level @auth directives is a little different.

  1. create - When placed on a @model type, will verify that only the owner may pass the field in the input arg. When used on a non @model type, this does nothing.
  2. update - When placed on a @model type, will verify that only the owner may pass the field in the input arg. When used on a non @model type, this does nothing.
  3. delete - When placed on a @model type, will verify that only the owner may set the value to null. Only object level @auth directives impact delete operations so this will actually augment the update mutation and prevent passing null if you are not the owner.
  4. read - Places a resolver on the field (or updates an existing resolver in the case of @connection) that restricts access to the field. When used on a non-model type, this still protects access to the resolver.

Proposal 5: And/Or in @auth rules

Github Issues

Problem

Currently all @auth rules are joined via a top level OR operation. For example, the schema below results in rules where you can access Post objects if you are the owner OR if you are member of the "Admin" group.

type Post @model @auth(rules: [{ allow: owner }, { allow: groups, groups: ["Admin"] }]) {
    id: ID!
    title: String
    author: User @connection(name: "UserPosts")
    owner: String
}

It would be useful if you could organize these auth rules using more complex rules combined with AND and OR.

Solution

We can accomplish this by adding to the the @auth definition.

directive @auth(rules: [TopLevelAuthRule!]!) on OBJECT, FIELD_DEFINITION
input TopLevelAuthRule {
    # For backwards compat, any rule specified at the same level as an "and"/"or" will be joined via an OR.
    allow: AuthStrategy!
    ownerField: String # defaults to "owner"
    identityField: String # defaults to "cognito:username" for UserPools, "username" for IAM, "sub" for OIDC
    groupsField: String
    groups: [String]
    
    # This only exists in top level rules and specifies operations for all the rules even when combined with and/or.
    # Neseted "operations" tags are not allowed because it would confuse evaluation logic.
    operations: [ModelOperation]

    # New recursive fields on AuthRule
    and: [AuthRule]
    or: [AuthRule]   
}
input AuthRule {
    allow: AuthStrategy!
    ownerField: String # defaults to "owner"
    identityField: String # defaults to "cognito:username" for UserPools, "username" for IAM, "sub" for OIDC
    groupsField: String
    groups: [String]

    # New recursive fields on AuthRule
    and: [AuthRule]
    or: [AuthRule]
}
enum AuthStrategy { owner groups }
# Reduces get/list to read. See explanation below.
enum ModelOperation { create update delete read }

This would allow users to define advanced auth configurations like:

type User
  @model
  @auth(rules: [{
    and: [
      { allow: owner },
      { or: [
        { allow: groups, groups: ["Admin"] },
        { allow: owner, ownerField: "admins" }
      ]}
    ],
    operations: [read]
  }]) {
  id: ID!
  admins: [String]
  owner: String
}
# Logically: ( isOwner && ( isInAdminGroup || isMemberOfAdminsField ) )

The generated resolver logic will need to be updated to evaluate the expression tree.
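
To make the evaluation concrete, the response-template check for the example above would reduce to something like this sketch (illustrative VTL, not the generated code):

#set($username = $ctx.identity.claims.get("cognito:username"))
#set($isOwner = ($ctx.result.owner == $username))
#set($groups = $util.defaultIfNull($ctx.identity.claims.get("cognito:groups"), []))
#set($isInAdminGroup = $groups.contains("Admin"))
#set($admins = $util.defaultIfNull($ctx.result.admins, []))
#set($isMemberOfAdminsField = $admins.contains($username))
## Evaluate the rule tree: ( isOwner && ( isInAdminGroup || isMemberOfAdminsField ) )
#if( !($isOwner && ($isInAdminGroup || $isMemberOfAdminsField)) )
    $util.unauthorized()
#end
$util.toJson($ctx.result)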

Proposal 6: Deny by default mode.

Github Issues

Problem: There is currently no way to specify deny access by default for Amplify APIs.

If you create an API using a schema:

type Post @model {
    id: ID!
    title: String!
}

then the generated create, update, delete, get, and list resolvers allow access to any request that includes a valid user pool token (for USER_POOL auth). This proposal will introduce a flag that specifies that all operations should be denied by default and thus all fields that do not contain an explicit auth rule will be denied. This will also change the behavior of create mutations such that the logged in user identity is never added automatically when creating objects with ownership auth.

Solution: Provide a flag that enables deny by default

Adding a DenyByDefault flag to parameters.json or transform.conf.json will allow users to specify whether fields without an @auth directive allow access or not. When deny by default is enabled, the following changes will be made.

  1. Mutation.createX resolvers will no longer auto-inject the ownership credential when the ownership credential is not provided when creating objects. Users will have to supply the ownership credential from the client and it will be validated in the mutation resolver (this happens already when you provide the ownership credential in the input).
  2. All resolvers created by a @model without an @auth directive will be denied by default.
  3. All resolvers created by @searchable on a @model without an @auth will be denied by default.
  4. All resolvers created by @connection that return a @model without its own @auth AND that do not have their own field level @auth will be denied by default.
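
The flag itself would be a one-line addition to the API's parameters.json (the DenyByDefault key is hypothetical, as proposed here; AppSyncApiName is an existing parameter):

{
    "AppSyncApiName": "mygraphqlapi",
    "DenyByDefault": true
}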

For example, with deny by default enabled

type Post @model {
    id: ID!
    title: String
    author: User @connection(name: "UserPosts")
    comments: [Comment] @connection(name: "PostComments")
}
type User @model @auth(rules: [{allow: owner}]) {
    id: ID!
    username: String!
    posts: [Post] @connection(name: "UserPosts")
}
type Comment @model {
    id: ID!
    content: String
    post: Post @connection(name: "PostComments")
}

This mutation would fail:

mutation CreatePost {
    createPost(...) {
        id
        title
    }
}

This mutation would succeed:

mutation CreateUser {
    createUser(input: { username: "[email protected]" }) { # Assuming the logged in user identity is the same
        id
        username
    }
}

This top level query would succeed but Post.comments would fail.

query GetUserAndPostsAndComments {
    getUser(id: 1) { # succeeds assuming this is my user.
        posts { # succeeds because the @auth on User authorizes the child fields
            items {
                title
                comments { # fails because there is no auth rule on Post, Comment, or the Post.comments field.
                    items {
                        content
                    }
                }
            }
        }
    }
}

More details coming soon

  1. Write custom auth logic w/ pipeline functions
  2. Enable IAM auth within the API category

Request for comments

This document details a road map for authorization improvements in the Amplify CLI's API category. If there are use cases that are not covered or you have a suggestion for one of the proposals above please comment below.

CustomResources.json validation

Is your feature request related to a problem? Please describe.
Any configuration error in CustomResources.json will lead to a lengthy deployment & auto-rollback.

Describe the solution you'd like
Validation checks of CustomResources.json should be performed before kicking off a deployment.

Describe alternatives you've considered
Double- and triple-checking the file content.


Multi Instance Support (vs Tenancy)

Is your feature request related to a problem? Please describe.
Multi-tenancy has been brought up a number of times, with the latest approach being described in aws-amplify/amplify-cli#1043

Whilst I do not discount that approach, there is also the possibility of a multi-instance approach, whereby the data is technically separated into different databases, with segregated access levels giving clients an extra layer of assurance that data is segregated between clients. There is also the question of indexes and efficiency: in a multi-tenant system where one client significantly over-uses the system, they are far more likely to overload specific shards if a partition key including the name/id of the tenant is required (and I'm not sure there is a way around that other than some very long/complex keys).

Whilst it is possible to create multiple Amplify instances to achieve just that, updating tens to hundreds of instances with Amplify is not straightforward.

Describe the solution you'd like

a) An option to deploy the CloudFormation stack / Amplify to multiple instances at the same time, plus some Lambda-enabled CLI that could trigger the creation of a new stack.

b) Alternatively, the option to create new table prefixes dynamically [+ create the required tables], with switching logic on the table prefix depending on the tenant id. Tenant ids and identifiers can be stored in, let's call them, global tables which are not re-created per tenant. Ideally, all tenant tables could be tagged to simplify understanding billing and growth per client.

Describe alternatives you've considered
We've looked at multi-tenancy as is being recommended, which could work but may not give sufficient peace of mind in specific scenarios. We also considered using separate Amplify instances, which we are not really happy with at the moment.

Additional context
PS. We are happy to potentially dedicate some resources to this problem if we get confirmation of the suggested approach and a couple of pointers on where to start, as we are not entirely sure how to go about modifying the CloudFormation templates in the above manner.

[codegen] Inconsistency between codegen resolver and sample resolver

Describe the bug
Not exactly a bug, but I found some inconsistency that can be easily resolved. I use codegen for resolver generation and get this for Query.listItems:

#set( $limit = $util.defaultIfNull($context.args.limit, 10) )
{
  "version": "2017-02-28",
  "operation": "Scan",
  "filter":   #if( $context.args.filter )
$util.transform.toDynamoDBFilterExpression($ctx.args.filter)
  #else
null
  #end,
  "limit": $limit,
  "nextToken":   #if( $context.args.nextToken )
"$context.args.nextToken"
  #else
null
  #end
}

However, the sample request:

{
    "version" : "2017-02-28",
    "operation" : "Scan",
    "limit": $util.defaultIfNull(${ctx.args.limit}, 20),
    "nextToken": $util.toJson($util.defaultIfNullOrBlank($ctx.args.nextToken, null))
}

The difference is obvious; the sample one seems more concise.

Expected behavior
The codegen request should look similar to the sample request.

GraphQL Api - Custom Parameters from parameters.json

Is your feature request related to a problem? Please describe.
Unless I missed something, there is no way to pass variables into the CloudFormation template of GraphQL custom resources. Parameters get replaced during the build process.

Describe the solution you'd like
Be able to add custom parameters to parameters.json to use in the CustomResources.json stack template.

Describe alternatives you've considered
Hardcoding the values in the CloudFormation template I guess.

What I've been trying so far

backend -> api -> graphql -> stacks -> CustomResources.json

"Parameters": {
  ...
  "S3DeploymentBucket": {
    "Type": "String",
    "Description": "The S3 bucket containing all deployment assets for the project."
  },
  "S3DeploymentRootKey": {
    "Type": "String",
    "Description": "An S3 key relative to the S3DeploymentBucket that points to the root\nof the deployment directory."
  },
  "authRoleName": { // <------------ Mine
    "Type": "String",
    "Description": "My Custom Parameter"
  }
},

backend -> api -> graphql -> parameters.json

{
    "AppSyncApiName": "graphql",
    "DynamoDBBillingMode": "PAY_PER_REQUEST",
    "AuthCognitoUserPoolId": {
        "Fn::GetAtt": [
            "authcognito6a5cad5b",
            "Outputs.UserPoolId"
        ]
    },
    "authRoleName": {
        "Ref": "AuthRoleName"
    }
}

After build, parameters.json inside build folder:
backend -> api -> graphql -> build -> parameters.json

{
    "AppSyncApiName": "graphql",
    "DynamoDBBillingMode": "PAY_PER_REQUEST",
    "AuthCognitoUserPoolId": {
        "Fn::GetAtt": [
            "authcognito6a5cad5b",
            "Outputs.UserPoolId"
        ]
    },
    "S3DeploymentBucket": "graphql-deployment",  //  <-------- Added by amplify
    "S3DeploymentRootKey": "amplify-appsync-files/xxxxxxxx"  //  <-------- Added by amplify
}
> amplify push
...
UPDATE_FAILED      CustomResourcesjson AWS::CloudFormation::Stack Sun Apr 21 2019 15:44:38 GMT+0100 (WEST) Parameters: [authRoleName] must have values
⠧ Updating resources in the cloud. This may take a few minutes...
...

add total page counts for GraphQL paginated queries

** Which Category is your question related to? **
Query a list from Dynamo using graphql api generated in amplify.

** What AWS Services are you utilizing? **
dynamo, appsync

** Provide additional details e.g. code snippets **
is there a "previousToken" similar to "nextToken" when get data through app sync?
If not, what's the best practice to go to previous page? store tokens locally?
Finally, is there a way to go skip pages, for example go from page 1 to page 3?

Add 'Enable Logs' option for AppSync API

Is your feature request related to a problem? Please describe.

Would like an option for the CLI to enable logging for the AppSync GraphQL api it generates, i.e. turn this on:

[Screenshot: AWS AppSync console, logging toggle, 2019-04-02]

Describe the solution you'd like
The CLI to do this and add it to the cloudformation template.

Describe alternatives you've considered
I believe I'll have to modify the CloudFormation template myself.
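
For reference, the manual change amounts to adding a LogConfig block to the AWS::AppSync::GraphQLApi resource in the generated template. A minimal sketch, where the ApiLogsRole IAM role (with permission to write to CloudWatch Logs) is an assumption you would have to define yourself:

"GraphQLAPI": {
    "Type": "AWS::AppSync::GraphQLApi",
    "Properties": {
        "Name": "myapi",
        "AuthenticationType": "AMAZON_COGNITO_USER_POOLS",
        "LogConfig": {
            "FieldLogLevel": "ALL",
            "CloudWatchLogsRoleArn": { "Fn::GetAtt": ["ApiLogsRole", "Arn"] }
        }
    }
}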

GraphQL Transformer Region Support

I have tried modelling my application with Amplify but it doesn't currently have the multi-region support that I require.

I am building a complex service spread across multiple regions. Some of the data used by the service would most naturally fit in Dynamo global tables, other data would be region specific. It would be great to have Amplify support the tagging of @model directives to control the distribution of data and have Amplify support multi-region deployment.

Given the scope of the problem I don't want to be too specific and I can't really give a complete feature list. The small list below describes the features I am looking for right now:

  • Need to abstract away the location of data - some data will be global, some will be local.
    • Global data would be shared across all regions to which the service is deployed; changes need to be propagated to all the places that replicate the data.
    • Local data would be held in a single region, although the types of data are identical across all regions (same schema).
  • Top level deployment should list the regions to include in the deployment.
    • Adding and removing deployed regions in a live system should be supported.

amplify add api requires "Provide a custom type name"

Describe the bug
When using the amplify add api command it asks you to Provide a custom type name. This is not always necessary. I immediately followed up with amplify api add-graphql-datasource to add an Aurora Serverless data source. This results in unnecessary DynamoDB tables being created and extra schema being added.

Expected behavior
Option to skip Dynamo table creation.

In a single schema, a second query and mutation section is ignored by the schema compiler

Describe the bug
In a single schema, a second query and mutation section is ignored by the schema compiler.

To Reproduce

create schema.graphql:

type A @model(queries: null, mutations: null) { id: ID!, name: String }
type B @model(queries: null, mutations: null) { id: ID!, name: String }
type Mutation {
  createA(id: ID!): A
}
type Query {
  getA(id: ID!): A
}
type Mutation {
  createB(id: ID!): B
}

Expected behavior
Since this isn't valid, the compiler should throw an error; instead it just ignores the extra entries. It would be better if it merged them, but I think the correct behavior is to throw an error so the programmer can correct the file.

Thank You.

Generated mutations don't include array of objects field

** Which Category is your question related to? **
amplify push

** What AWS Services are you utilizing? **
amplify, aws appsync, graphql

** Provide additional details e.g. code snippets **

I'm using amplify push to generate my GraphQL queries and mutations. Here's my schema:

type Product @model @auth(rules: [{ allow: owner }]) {
  id: ID!
  name: String!
  price: Float!
  vendor: Vendor @connection(name: "ProductVendor")
  category: String
  units: String
  defaultqty: Int
  maxqty: Int
}

type Order @model @auth(rules: [{ allow: owner }]) {
  id: ID!
  name: String
  products: [Product] @connection
}

type Vendor @model @auth(rules: [{ allow: owner }]) {
  id: ID!
  name: String!
  products: [Product] @connection(name: "ProductVendor")
}

amplify push successfully generates queries and mutations, but the createOrder mutation accepts a createOrderInput type, which only has a name field, and no products field, so I'm confused about how to create an Order that has a list of Products.

How should I define my schema so that I properly generate mutations that allow me to create Orders that have an array of Product items? I've seen recommendations to replace products: [Product] with products: [ID], or product: [AWSJSON], but I'd like to be able to input the actual Product type, and not just Product IDs or a Json string representing a list of Products.

Any advice appreciated!

How to create a custom directive for multi-tenancy with a pipeline resolver?

Which Category is your question related to?
Creating a custom directive / transformer

What AWS Services are you utilizing?
Amplify + AppSync

Provide additional details e.g. code snippets
I'm building a multi-tenant app and I'm generally avoiding Cognito groups because of their limitations. I'd like to create a custom directive @tenant which checks whether a user has a Membership with the tenantId listed on the resource. Effectively, any @model with @tenant would have to have a tenantId property set. I think this is a good fit for pipeline resolvers.

@tenant would effectively transform the @model resolver into a pipeline resolver that first checks that the user belongs to the Tenant listed on the resource, erroring out before any CRUDL request is resolved.
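
A sketch of the membership-check function such a pipeline could run first (everything here is hypothetical: the function name, the byUsername index, and the tenantId attribute layout on the Membership table):

## CheckTenantMembership.req.vtl (hypothetical pipeline function)
#set($username = $ctx.identity.claims.get("cognito:username"))
{
    "version": "2018-05-29",
    "operation": "Query",
    "index": "byUsername",
    "query": {
        "expression": "#username = :username",
        "expressionNames": { "#username": "username" },
        "expressionValues": { ":username": { "S": "$username" } }
    }
}

## CheckTenantMembership.res.vtl — stop the pipeline unless one of the
## caller's memberships matches the tenantId on the incoming resource.
#set($allowed = false)
#foreach($membership in $ctx.result.items)
    #if($membership.tenantId == $ctx.args.input.tenantId)
        #set($allowed = true)
    #end
#end
#if( !$allowed )
    $util.unauthorized()
#end
$util.toJson($ctx.result)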

I'm not sure if this is the best approach. I know some changes are in the works for the @auth directive, but even if those roll out soon and answer all my concerns about multi-tenancy auth, it would be nice to understand custom resolvers / transformers better. The current documentation is minimal and sort of just goes over pseudocode for how to implement one.

Thanks for any insight you can provide!

Simplified schema details:

enum MemberType {
  NORMAL
  ADMIN
}

type Membership @model {
  id: ID!
  username: String!
  displayName: String!
  type: MemberType!
  tenant: Tenant! @connection(name: "TenantMemberships")
}

type Tenant @model {
  id: ID!
  name: String!
  memberships: [Membership!]! @connection(name: "TenantMemberships")
}

type Post @model @tenant {
  id: ID!
  tenantId: String!
  title: String!
  content: String!
  author: String!
}

GraphQL relationship with RDS Aurora

** Which Category is your question related to? **
GraphQL schema

** What AWS Services are you utilizing? **
Appsync

** Provide additional details e.g. code snippets **
What I want to do is link one type to another type after generating the schema from RDS Aurora Serverless.
Is there any way to do that, or do I need to create the resolver file myself?
