
REST Layer

REST APIs made easy.


REST Layer is an API framework heavily inspired by the excellent Python Eve. It helps you create a comprehensive, customizable, and secure REST (graph) API on top of pluggable backend storages with no boilerplate code, so you can focus on your business logic.

Implemented as a net/http handler, it plays well with standard middleware like CORS. It is also context aware, which allows deadline management down to the storage layer and easy extensibility by passing custom data between layers of the framework.

REST Layer is an opinionated framework. Unlike many API frameworks, you don't directly control the routing and you don't have to write handlers. You just define resources and sub-resources with a schema, and the framework automatically figures out what routes need to be generated behind the scenes. You don't have to take care of HTTP headers and responses, JSON encoding, etc. either: REST Layer handles HTTP conditional requests, caching and integrity checking for you.

A powerful and extensible validation engine makes sure that data comes pre-validated to your custom storage handlers. Generic resource handlers for MongoDB, ElasticSearch and other databases are also available, so you have little to no code to write to get up and running.

Moreover, REST Layer lets you create a graph API by linking resources together. Thanks to its advanced field selection syntax or GraphQL support, you can gather resources and their dependencies in a single request, saving you from costly network round-trips.

The REST Layer framework is composed of several sub-packages:

package layout

Package Description
rest A net/http handler to expose a REST-ful API.
graphql A net/http handler to expose your API using the GraphQL protocol.
schema A validation framework for the API resources.
resource Defines resources, manages the resource graph and manages the interface with resource storage handlers.

Documentation

Breaking Changes

Until we reach a stable v1, there will be occasional breaking changes to the rest-layer APIs. Breaking changes will, however, not be introduced in patch releases.

Breaking changes since v0.2.0

No breaking changes since v0.2.0.

Breaking changes prior to v0.2.0

Below is an incomplete list of breaking changes included in v0.2.0:

  • PR #151: The ValuesValidator FieldValidator attribute in the schema.Dict struct has been replaced by the Values field.
  • PR #179: The ValuesValidator FieldValidator attribute in the schema.Array struct has been replaced by the Values field.
  • PR #204:
    • Storage drivers need to accept pointer to Expression implementer in query.Predicate.
    • filter parameters in sub-query will be validated for type match.
    • filter parameters will be validated for type match only, instead of type & constraints.
  • PR #228: Reference projection fields will be validated against referenced resource schema.
  • PR #230: Connection projection fields will be validated against connected resource schema.
  • PR #241: The OnUpdate field hook is always called on HTTP PUT for existing documents. A deleted field with a Default value set will always be reset to its default value.

Features

  • Automatic handling of REST resource operations
  • Full test coverage
  • Plays well with other net/http middleware
  • Pluggable resources storage
  • Pluggable response sender
  • GraphQL query support
  • GraphQL mutation support
  • Swagger Documentation
  • JSONSchema Output (partial)
  • Testing framework
  • Sub resources
  • Cascading deletes on sub resources
  • Filtering
  • Sorting
  • Pagination
  • Aliasing
  • Custom business logic
  • Event hooks
  • Field hooks
  • Extensible data validation and transformation
  • Conditional requests (Last-Modified / Etag)
  • Data integrity and concurrency control (If-Match)
  • Timeout and request cancellation through context
  • Logging
  • Multi-GET
  • Bulk inserts
  • Default and nullable values
  • Per resource cache control
  • Customizable authentication / authorization
  • Projections
  • Embedded resource serialization
  • Sub-request concurrency control
  • Custom ID field
  • Data versioning
  • Per resource circuit breaker using Hystrix
  • JSON-Patch support

Extensions

As REST Layer is a simple net/http handler, you can use standard middleware to extend its functionality:

Main Storage Handlers

Alternate Storage Handlers

Usage

package main

import (
	"log"
	"net/http"

	"github.com/rs/rest-layer/resource/testing/mem"
	"github.com/rs/rest-layer/resource"
	"github.com/rs/rest-layer/rest"
	"github.com/rs/rest-layer/schema/query"
	"github.com/rs/rest-layer/schema"
)

var (
	// Define a user resource schema
	user = schema.Schema{
		Description: `The user object`,
		Fields: schema.Fields{
			"id": {
				Required: true,
				// When a field is read-only, only default values or hooks can
				// set its value. The client can't change it.
				ReadOnly: true,
				// This is a field hook called when a new user is created.
				// The schema.NewID hook is a provided hook to generate a
				// unique id when no value is provided.
				OnInit: schema.NewID,
				// Filterable and Sortable allow usage of filter and sort
				// on this field in requests.
				Filterable: true,
				Sortable:   true,
				Validator: &schema.String{
					Regexp: "^[0-9a-v]{20}$",
				},
			},
			"created": {
				Required:   true,
				ReadOnly:   true,
				Filterable: true,
				Sortable:   true,
				OnInit:     schema.Now,
				Validator:  &schema.Time{},
			},
			"updated": {
				Required:   true,
				ReadOnly:   true,
				Filterable: true,
				Sortable:   true,
				OnInit:     schema.Now,
				// The OnUpdate hook is called when the item is edited. Here we use
				// provided Now hook which returns the current time.
				OnUpdate:  schema.Now,
				Validator: &schema.Time{},
			},
			// Define a name field as required with a string validator
			"name": {
				Required:   true,
				Filterable: true,
				Validator: &schema.String{
					MaxLen: 150,
				},
			},
		},
	}

	// Define a post resource schema
	post = schema.Schema{
		Description: `Represents a blog post`,
		Fields: schema.Fields{
			// schema.*Field are shortcuts for common fields
			// (identical to users' same fields)
			"id":      schema.IDField,
			"created": schema.CreatedField,
			"updated": schema.UpdatedField,
			// Define a user field which references the user owning the post.
			// See below: the content of this field is enforced by the fact
			// that posts is a sub-resource of users.
			"user": {
				Required:   true,
				Filterable: true,
				Validator: &schema.Reference{
					Path: "users",
				},
			},
			"published": {
				Required: true,
				Filterable: true,
				Default: false,
				Validator: &schema.Bool{},
			},
			"title": {
				Required: true,
				Validator: &schema.String{
					MaxLen: 150,
				},
			},
			"body": {
				// Dependency defines that body field can't be changed if
				// the published field is not "false".
				Dependency: query.MustParsePredicate(`{"published": false}`),
				Validator: &schema.String{
					MaxLen: 100000,
				},
			},
		},
	}
)

func main() {
	// Create a REST API resource index
	index := resource.NewIndex()

	// Add a resource on /users[/:user_id]
	users := index.Bind("users", user, mem.NewHandler(), resource.Conf{
		// We allow all REST methods
		// (resource.ReadWrite is a shortcut for []resource.Mode{resource.Create,
		//  resource.Read, resource.Update, resource.Delete, resource.List})
		AllowedModes: resource.ReadWrite,
	})

	// Bind a sub resource on /users/:user_id/posts[/:post_id]
	// and reference the user on each post using the "user" field of the posts resource.
	users.Bind("posts", "user", post, mem.NewHandler(), resource.Conf{
		// Posts can only be read, created and deleted, not updated
		AllowedModes: []resource.Mode{resource.Read, resource.List,
			 resource.Create, resource.Delete},
	})

	// Create API HTTP handler for the resource graph
	api, err := rest.NewHandler(index)
	if err != nil {
		log.Fatalf("Invalid API configuration: %s", err)
	}

	// Bind the API under /api/ path
	http.Handle("/api/", http.StripPrefix("/api/", api))

	// Serve it
	log.Print("Serving API on http://localhost:8080")
	if err := http.ListenAndServe(":8080", nil); err != nil {
		log.Fatal(err)
	}
}

Just run this code (or use the provided examples/demo):

$ go run examples/demo/main.go
2015/07/27 20:54:55 Serving API on http://localhost:8080

Using HTTPie, you can now play with your API.

First create a user:

$ http POST :8080/api/users name="John Doe"
HTTP/1.1 201 Created
Content-Length: 155
Content-Location: /api/users/ar6ejgmkj5lfl98r67p0
Content-Type: application/json
Date: Mon, 27 Jul 2015 19:10:20 GMT
Etag: "1e18e148e1ff3ecdaae5ec03ac74e0e4"
Last-Modified: Mon, 27 Jul 2015 19:10:20 GMT
Vary: Origin

{
    "id": "ar6ejgmkj5lfl98r67p0",
    "created": "2015-07-27T21:10:20.671003126+02:00",
    "updated": "2015-07-27T21:10:20.671003989+02:00",
    "name": "John Doe",
}

As you can see, the id, created and updated fields have been automatically generated by our OnInit field hooks.

Also notice the Etag and Last-Modified headers. Those guys allow data integrity and concurrency control down to the storage layer through the use of the If-Match and If-Unmodified-Since headers. They can also serve for conditional requests using If-None-Match and If-Modified-Since headers.

Here is an example of conditional request:

$ http :8080/api/users/ar6ejgmkj5lfl98r67p0 \
  If-Modified-Since:"Mon, 27 Jul 2015 19:10:20 GMT"
HTTP/1.1 304 Not Modified
Date: Mon, 27 Jul 2015 19:17:11 GMT
Vary: Origin

And here is a data integrity request following the RFC-5789 recommendations:

$ http PATCH :8080/api/users/ar6ejgmkj5lfl98r67p0 \
  name="Someone Else" If-Match:invalid-etag
HTTP/1.1 412 Precondition Failed
Content-Length: 58
Content-Type: application/json
Date: Mon, 27 Jul 2015 19:33:27 GMT
Vary: Origin

{
    "code": 412,
    "fields": null,
    "message": "Precondition Failed"
}

Retry with the valid etag:

$ http PATCH :8080/api/users/ar6ejgmkj5lfl98r67p0 \
  name="Someone Else" If-Match:'"1e18e148e1ff3ecdaae5ec03ac74e0e4"'

HTTP/1.1 200 OK
Content-Length: 159
Content-Type: application/json
Date: Mon, 27 Jul 2015 19:36:19 GMT
Etag: "7bb7a71b0f66197aa07c4c8fc9564616"
Last-Modified: Mon, 27 Jul 2015 19:36:19 GMT
Vary: Origin

{
    "created": "2015-07-27T21:33:09.168492448+02:00",
    "id": "ar6ejmukj5lflde9q8bg",
    "name": "Someone Else",
    "updated": "2015-07-27T21:36:19.904545093+02:00"
}

Note that even if you don't use conditional request, the Etag is always used by the storage handler to manage concurrency control between requests.

Another cool thing is sub-resources. We've set our posts resource as a child of the users resource. This way we can handle ownership very easily as routes are constructed as /users/:user_id/posts.

Let's create a post:

$ http POST :8080/api/users/ar6ejgmkj5lfl98r67p0/posts \
  title="My first post"
HTTP/1.1 200 OK
Content-Length: 212
Content-Type: application/json
Date: Mon, 27 Jul 2015 19:46:55 GMT
Etag: "307ae92df6c3dd54847bfc7d72422e07"
Last-Modified: Mon, 27 Jul 2015 19:46:55 GMT
Vary: Origin

{
    "id": "ar6ejs6kj5lflgc28es0",
    "created": "2015-07-27T21:46:55.355857401+02:00",
    "updated": "2015-07-27T21:46:55.355857989+02:00",
    "title": "My first post",
    "user": "ar6ejgmkj5lfl98r67p0"
}

Notice how the user field has been set with the user id provided in the route, that's pretty cool, huh?

We defined that we can create posts but not modify them; let's verify that:

$ http PATCH :8080/api/users/821d…/posts/ar6ejs6kj5lflgc28es0 \
  private=true
HTTP/1.1 405 Method Not Allowed
Content-Length: 53
Content-Type: application/json
Date: Mon, 27 Jul 2015 19:50:33 GMT
Vary: Origin

{
    "code": 405,
    "fields": null,
    "message": "Invalid method"
}

Let's list posts for that user now:

$ http :8080/api/users/ar6ejgmkj5lfl98r67p0/posts
HTTP/1.1 200 OK
Content-Length: 257
Content-Type: application/json
Date: Mon, 27 Jul 2015 19:51:46 GMT
Vary: Origin
X-Total: 1

[
    {
        "id": "ar6ejs6kj5lflgc28es0",
        "_etag": "307ae92df6c3dd54847bfc7d72422e07",
        "created": "2015-07-27T21:46:55.355857401+02:00",
        "updated": "2015-07-27T21:46:55.355857989+02:00",
        "title": "My first post",
        "user": "ar6ejgmkj5lfl98r67p0"
    }
]

Notice the added _etag field. This is to let you get etags of multiple items without having to GET each one of them through individual requests.

Now, let's get the user's information for each post in a single request:

$ http :8080/api/users/ar6ejgmkj5lfl98r67p0/posts fields=='id,title,user{id,name}'
HTTP/1.1 200 OK
Content-Length: 257
Content-Type: application/json
Date: Mon, 27 Jul 2015 19:51:46 GMT
Vary: Origin
X-Total: 1

[
    {
        "id": "ar6ejs6kj5lflgc28es0",
        "_etag": "307ae92df6c3dd54847bfc7d72422e07",
        "created": "2015-07-27T21:46:55.355857401+02:00",
        "updated": "2015-07-27T21:46:55.355857989+02:00",
        "title": "My first post",
        "user": {
            "id": "ar6ejgmkj5lfl98r67p0",
            "name": "John Doe"
        }
    }
]

Notice how we selected which fields we wanted in the result using the field selection query format. Thanks to sub-request support, the user name is included with each post with no additional HTTP request.

We can go even further and embed sub-request list responses. Let's say we want a list of users with their last two posts:

$ http GET :8080/api/users fields='id,name,posts(limit:2){id,title}'
HTTP/1.1 200 OK
Content-Type: application/json
Date: Mon, 27 Jul 2015 19:10:20 GMT
Vary: Origin
X-Total: 1

[
    {
        "id": "ar6ejgmkj5lfl98r67p0",
        "name": "John Doe",
        "posts": [
            {
                "id": "ar6ejs6kj5lflgc28es0",
                "title": "My first post"
            },
            {
                "id": "ar6ek26kj5lfljgh84qg",
                "title": "My second post"
            }
        ]
    }
]

Sub-requests are executed concurrently whenever possible to ensure the fastest response time.

Resource Configuration

For REST Layer to be able to expose resources, you have to first define what fields the resource contains and where to bind it in the REST API URL namespace.

Schema

Resource field configuration is performed through the schema package. A schema is a struct describing a resource. A schema is composed of metadata about the resource and a description of the allowed fields through a map of field names to field definitions.

Sample resource schema:

foo = schema.Schema{
	Description: "A foo object",
	Fields: schema.Fields{
		"field_name": {
			Required: true,
			Filterable: true,
			Validator: &schema.String{
				MaxLen: 150,
			},
		},
	},
}

Schema fields:

Field Description
Description The description of the resource. This is used for API documentation.
Fields A map of field name to field definition.

Field Definition

A field definition contains the following properties:

Field Description
Required If true, the field must be provided when the resource is created and can't be set to null. The client may be able to omit a required field if a Default or a hook sets its content.
ReadOnly If true, the field can not be set by the client; only a Default or a hook can alter its value. You may specify a value for a read-only field in your mutation request if the value is equal to the old value; REST Layer won't complain about it. This lets your client PUT back the same document it got with GET without having to remove the read-only fields.
Hidden Hidden allows writes but hides the field's content from the client. When enabled, PUTting the document without the field does not remove it; the previous document's value is kept, if any.
Default The value to be set when the resource is created and the client didn't provide a value for the field. The content of this variable must still pass validation.
OnInit A function to be executed when the resource is created. The function gets the current value of the field (after Default has been set if any) and returns the new value to be set.
OnUpdate A function to be executed when the resource is updated. The function gets the current (updated) value of the field and returns the new value to be set.
Params Params defines the list of parameters allowed for this field. See Field Parameters section for some examples.
Handler Handler defines a function able to change the field's value depending on the passed parameters. See Field Parameters section for some examples.
Validator A schema.FieldValidator to validate the content of the field.
Dependency A query using filter format created with query.MustParsePredicate(`{"field": "value"}`). If the query doesn't match the document, the field generates a dependency error.
Filterable If true, the field can be used with the filter parameter. You may want to ensure the backend database has this field indexed when enabled. Some storage handlers may not support all the operators of the filter parameter, see their documentation for more information.
Sortable If true, the field can be used with the sort parameter. You may want to ensure the backend database has this field indexed when enabled.
Schema An optional sub schema to validate hierarchical documents.
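
To illustrate how these properties combine, here is a hedged sketch of a single field definition mixing several of them; the quantity field and its dependency predicate are made up for illustration, while the properties and helpers themselves are the ones listed above:

"quantity": {
	Required:   true,
	Filterable: true,
	Sortable:   true,
	// Used when the client omits the field at creation time; the value
	// must still pass the validator below.
	Default: 0,
	// Illustrative assumption: the field may only change while the
	// document is not published.
	Dependency: query.MustParsePredicate(`{"published": false}`),
	Validator:  &schema.Integer{},
},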

REST Layer comes with a set of validators. You can add your own by implementing the schema.FieldValidator interface. Here is the list of provided validators:

Validator Description
schema.String Ensures the field is a string
schema.Integer Ensures the field is an integer
schema.Float Ensures the field is a float
schema.Bool Ensures the field is a Boolean
schema.Array Ensures the field is an array
schema.Dict Ensures the field is a dict
schema.Object Ensures the field is an object validating against a sub-schema
schema.Time Ensures the field is a datetime
schema.URL Ensures the field is a valid URL
schema.IP Ensures the field is a valid IPv4 or IPv6
schema.Password Ensures the field is a valid password and bcrypt it
schema.Reference Ensures the field contains a reference to another existing API item
schema.AnyOf Ensures that at least one sub-validator is valid
schema.AllOf Ensures that all sub-validators are valid

Some common hook handlers to be used with OnInit and OnUpdate are also provided:

Hook Description
schema.Now Returns the current time ignoring the input (current) value.
schema.NewID Returns a unique identifier using xid if input value is nil.

Some common field configurations are also provided as variables:

Field Config Description
schema.IDField A required, read-only field with schema.NewID set as OnInit hook and a schema.String validator matching xid format.
schema.CreatedField A required, read-only field with schema.Now set on OnInit hook with a schema.Time validator.
schema.UpdatedField A required, read-only field with schema.Now set on OnInit and OnUpdate hooks with a schema.Time validator.
schema.PasswordField A hidden, required field with a schema.Password validator.

Here is an example of schema declaration:

// Define a post resource schema
post = schema.Schema{
	Fields: schema.Fields{
		// schema.*Field are shortcuts for common fields (identical to users' same fields)
		"id":      schema.IDField,
		"created": schema.CreatedField,
		"updated": schema.UpdatedField,
		// Define a user field which references the user owning the post.
		// See below: the content of this field is enforced by the fact
		// that posts is a sub-resource of users.
		"user": {
			Required: true,
			Filterable: true,
			Validator: &schema.Reference{
				Path: "users",
			},
		},
		// Sub-documents are handled via a sub-schema
		"meta": {
			Schema: &schema.Schema{
				Fields: schema.Fields{
					"title": {
						Required: true,
						Validator: &schema.String{
							MaxLen: 150,
						},
					},
					"body": {
						Validator: &schema.String{
							MaxLen: 100000,
						},
					},
				},
			},
		},
	},
}

Binding

Now you just need to bind this schema at a specific endpoint on the resource.Index object:

index := resource.NewIndex()
posts := index.Bind("posts", post, mem.NewHandler(), resource.DefaultConf)

This tells the resource.Index to bind the post schema at the posts endpoint. The resource collection URL is then /posts and item URLs are /posts/<post_id>.

The resource.DefaultConf variable is a pre-defined resource.Conf type with sensible defaults. You can customize the resource behavior using a custom configuration.

The resource.Conf type has the following customizable properties:

Property Description
AllowedModes A list of resource.Mode allowed for the resource.
PaginationDefaultLimit If set, pagination is enabled by default for list requests, with the number of items per page as defined here. Note that the default only applies to list (GET) requests, i.e. it does not apply to clear (DELETE) requests.
ForceTotal Controls the behavior of the computation of the X-Total header and the total query-string parameter. See resource.ForceTotalMode for available options.
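
For instance, a resource can be bound with a custom configuration combining these properties; a minimal sketch, where the limit value is arbitrary:

posts := index.Bind("posts", post, mem.NewHandler(), resource.Conf{
	// Allow every CRUDL operation on this resource.
	AllowedModes: resource.ReadWrite,
	// Paginate list responses by default, 20 items per page.
	PaginationDefaultLimit: 20,
})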

Modes

REST Layer handles the mapping of HTTP methods to your resource URLs automatically. With REST, there are two kinds of resource URL paths: collection and item URLs. Collection URLs (/<resource>) point to the collection of items, while item URLs (/<resource>/<item_id>) point to a specific item in that collection. HTTP methods are used to perform CRUDL operations on those resources.

You can easily dis/allow an operation on a per-resource basis using resource.Conf's AllowedModes property. The use of modes instead of HTTP methods in the configuration adds a layer of abstraction necessary to handle specific cases, like the PUT HTTP method performing a create if the specified item does not exist or a replace if it does. This gives you precise control over what you want to allow.

Modes are passed as configuration to resources as follows:

users := index.Bind("users", user, mem.NewHandler(), resource.Conf{
	AllowedModes: []resource.Mode{resource.Read, resource.List, resource.Create, resource.Delete},
})

The following table shows how REST layer maps CRUDL operations to HTTP methods and modes:

Mode HTTP Method Context Description
Read GET Item Get an individual item by its ID.
List GET Collection List/find items using filters and sorts.
Create POST Collection Create an item letting the system generate its ID.
Create PUT Item Create an item by choosing its ID.
Update PATCH Item Partially modify the item following RFC-5789, RFC-6902.
Replace PUT Item Replace the item with a new one.
Delete DELETE Item Delete the item by its ID.
Clear DELETE Collection Delete all items from the collection matching the context and/or filters.

Note on GraphQL support and modes: the current implementation of GraphQL doesn't support mutations. Thus only resources with Read and List modes will be exposed with GraphQL. Support for other modes will be added in the future.

Hooks

Hooks are pieces of code you can attach before or after an operation is performed on a resource. A hook is a Go type implementing one of the event handler interfaces below, attached to a resource via the Resource.Use method.

Hook Interface Description
FindEventHandler Defines a function called when the resource is listed, with or without a query. Note that this hook is called for both resource and item fetches, as well as prior to updates and deletes.
FoundEventHandler Defines a function called with the result of a find on resource.
GetEventHandler Defines a function called when a get is performed on an item of the resource. Note: when a multi-get is performed, this hook is called for each item ID individually.
GotEventHandler Defines a function called with the result of a get on a resource.
InsertEventHandler Defines a function called before an item is inserted.
InsertedEventHandler Defines a function called after an item has been inserted.
UpdateEventHandler Defines a function called before an item is updated.
UpdatedEventHandler Defines a function called after an item has been updated.
DeleteEventHandler Defines a function called before an item is deleted.
DeletedEventHandler Defines a function called after an item has been deleted.
ClearEventHandler Defines a function called before a resource is cleared.
ClearedEventHandler Defines a function called after a resource has been cleared.

Note that these are resource-level hooks, and do not correspond one-to-one to rest or graphql operations. For the rest package in particular, note that an HTTP request to GET an item by ID will result in a Find call, not a Get call, which will trigger the OnFind and OnFound hooks rather than OnGet and OnGot. Similarly, a PATCH or PUT request will call Find before it calls Update, which will trigger the same hooks. If your hook logic requires knowing which rest-level operation is being performed, see rest.RouteFromContext.

All hook functions get a context.Context as first argument. If a network call must be performed from the hook, the context's deadline must be respected. If a hook returns an error, the whole request is aborted with that error. You can also use the context to pass data to your hooks from a middleware executed before REST Layer. This can be used to manage authentication, for instance. See examples/auth for an example.

Hooks that get passed both an error and an item, such as GotEventHandler, UpdatedEventHandler and DeletedEventHandler, should insert guards to handle the error being set and/or the item not being set; both can be true in some cases. It's also allowed to set items or errors to nil, which is why double pointers are often used.

func (hook Hook) OnGot(ctx context.Context, item **resource.Item, err *error) {
	// Guard.
	if *err != nil || *item == nil {
		return
	}
	// ...
}
func (hook Hook) OnGot(ctx context.Context, item **resource.Item, err *error) {
	// Overriding an error response.
	if *err != nil || *item == nil {
		(*err) = nil
		(*item) = fallbackItem()
	}
	// ...
}
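
As an illustration of using context data inside a hook, here is a sketch of a find hook restricting queries to the authenticated user's items. The OnFind signature and the query.Equal predicate are assumptions based on the resource and query packages; UserFromContext is a hypothetical helper set up by your authentication middleware:

type authHook struct{}

// OnFind narrows every list/find query to items owned by the current user.
func (h authHook) OnFind(ctx context.Context, q *query.Query) error {
	user, ok := UserFromContext(ctx) // hypothetical helper
	if !ok {
		return errors.New("unauthorized")
	}
	// Assumption: the predicate can be extended with an equality expression.
	q.Predicate = append(q.Predicate, &query.Equal{Field: "user", Value: user.ID})
	return nil
}

// Attach the hook to the resource: posts.Use(authHook{})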

Sub Resources

Sub-resources can be used to express a one-to-many parent-child relationship between two resources. A sub-resource is automatically filtered by its parent on the field specified as the second argument of the Bind method.

To create a sub-resource, you bind your resource on the object returned by the binding of the parent resource. For instance, here we bind a comments resource to a posts resource:

posts := index.Bind("posts", post, mem.NewHandler(), resource.DefaultConf)
// Bind comment as sub-resource of the posts resource
posts.Bind("comments", "post", comment, mem.NewHandler(), resource.DefaultConf)

The second argument post defines the field in the comments resource that refers to the parent. This field must be present in the resource and the backend storage must support filtering on it. As a result, we get a new hierarchical route as follows:

/posts/:post_id/comments[/:comment_id]

When performing a GET on /posts/:post_id/comments, it is like adding the filter {"post":"<post_id>"} to the request on the comments resource.

Additionally, thanks to REST Layer's embedding, this relationship can be embedded in the parent object as a sub-query:

/posts?fields=id,title,comments(limit=5,sort=-updated){id,user{id,name},message}

Here we would get all posts with their last 5 comments embedded in the comments field of each post object, and the user who commented embedded in each comment's sub-document:

[
    {
        "id": "abc",
        "comments": [
            {
                "id": "def",
                "user": {
                    "id": "ghi",
                    "name": "John Doe",
                },
                "message": "Last comment"
            },
        ]
    },
]

See embedding for more information.

Dependency

Fields can depend on other fields in order to be changed. To configure a dependency, set a filter on the Dependency property of the field using query.MustParsePredicate().

In this example, the body field can't be changed if the published field is not set to true:

post = schema.Schema{
	Fields: schema.Fields{
		"published": schema.Field{
			Validator:  &schema.Bool{},
		},
		"body": {
			Dependency: query.MustParsePredicate(`{"published": true}`),
			Validator:  &schema.String{},
		},
	},
}

HTTP Request Headers

Prefer

Currently supported values are:

  • return=minimal: When a request is successful (HTTP response status 200 or 201), the response body is not returned. For a response status of 200 OK, the status becomes 204 No Content. Can be used e.g. for PUT, POST and PATCH methods, where the returned body is already known by the client.
  • return=no-content: same as return=minimal.
$ echo '[{"op": "add", "path":"/foo", "value": "bar"}]' | http PATCH :8080/users/ar6ej4mkj5lfl688d8lg If-Match:'"1234567890123456789012345678901234567890"' \
Content-Type: application/json-patch+json \
Prefer: return=minimal
HTTP/1.1 204 No Content

Content-Type

The Content-Type of the request body. Most HTTP methods only support application/json by default, but PATCH requests also allow application/json-patch+json.

HTTP Request Methods

The following HTTP methods are currently supported by rest-layer.

OPTIONS

Used to tell the client which HTTP methods are supported for any given path.

HEAD

The same as GET, except it includes only headers in the response.

GET

Used to retrieve a (projected, see Field Selection) resource document by specifying its ID in the path, or to retrieve a paginated view of documents matching a query.

POST

Used to create a new resource document when the ID can be generated by the server. Field default values are set for omitted fields, and OnInit field hooks are issued.

PUT

Used to create or update a single resource document by specifying its ID in the path. Field default values are set for omitted fields. If the document did not previously exist, OnInit field hooks are issued; otherwise OnUpdate field hooks are issued.

If-Match concurrency protection could be used if relevant.

PATCH

Used to create or patch a single resource document by specifying its ID in the path. OnUpdate field hooks are issued.

REST Layer supports two PATCH protocols, that can be specified via the Content-Type header.

  • Simple field replacement (RFC-5789) - this protocol will update only the supplied top-level fields and will leave other fields in the document intact. This means that this protocol can't delete fields. This protocol is selected with the Content-Type: application/json HTTP request header.

  • JSON-Patch (RFC-6902) - when patching deeply nested documents, it is more convenient to use a protocol designed especially for this. This protocol is selected with the Content-Type: application/json-patch+json HTTP request header.

If-Match concurrency protection could be used if relevant.

Example JSON-Patch request where we utilize concurrency control and ask for the response body to be omitted:

$ echo '[{"op": "add", "path":"/foo", "value": "bar"}]' | http PATCH :8080/users/ar6ej4mkj5lfl688d8lg If-Match:'"1234567890123456789012345678901234567890"' \
Content-Type: application/json-patch+json \
Prefer: return=minimal
HTTP/1.1 204 No Content

DELETE

Used to delete single resource document given its ID, or multiple documents matching a query.

Querying

When supplying query parameters, be sure to honor the URL encoding scheme. If you need to include a + sign, use %2B, etc.

Filtering

To filter resources, you use the filter query-string parameter. The format of the parameter is inspired by the MongoDB query format. The filter parameter can be used with GET and DELETE methods on resource URLs.

To use a resource field with the filter parameter, the field must be defined on the resource and the Filterable field property must be set to true. You may want to ensure the backend database has this field indexed when enabled.

To specify an equality condition, use the query {<field>: <value>} to select all items with <field> equal to <value>. REST Layer will complain with a 422 HTTP error if any queried field is not defined in the resource schema or is used with an operator incompatible with the field type (i.e.: $lt on a string field).

A query can specify conditions for more than one field. Implicitly, a logical AND conjunction connects the clauses so that the query selects the items that match all the conditions.

It is also possible to use an explicit $and operator to join each clause with a logical AND. There are sometimes good use-cases for this, such as when joining two independent $or queries that must both match, or when programmatically merging multiple queries with potentially overlapping fields.

{$and: [
  {$or: [{quantity: {$gt: 100}}, {price: {$lt: 9.95}}]},
  {$or: [{length: {$lt: 1000}}, {width: {$lt: 1000}}]}
]}

Using the $or operator, you can specify a compound query that joins each clause with a logical OR conjunction so that the query selects the items that match at least one condition.

In the following example, the query document selects all items in the collection where the field quantity has a value greater than ($gt) 100 or the value of the price field is less than ($lt) 9.95:

{$or: [{quantity: {$gt: 100}}, {price: {$lt: 9.95}}]}

Match on sub-fields is performed through field path separated by dots. This example shows an exact match on the sub-fields country and city of the address sub-document:

{address.country: "France", address.city: "Paris"}

Some operators can change the type of match. For instance, $in can be used to match a field against several values. To select all items with the type field equal to either food or snacks, use the following query:

{type: {$in: ["food", "snacks"]}}

The opposite $nin is also available.

The following numeric comparisons operators are supported: $lt, $lte, $gt, $gte.

The $exists operator matches documents containing the field, even if this field is null.

{type: {$exists: true}}

You can invert the operator by passing false.

There is also a $regex operator that matches documents containing the field given as a regular expression. In general, the syntax of the regular expressions accepted is the same general syntax used by Perl, Python, and other languages. More precisely, it is the syntax accepted by RE2 and described at https://golang.org/s/re2syntax, except for \C.

Flags are supported for more control over regular expressions. Flag syntax is xyz (set) or -xyz (clear) or xy-z (set xy, clear z). The flags are:

Flag Mode Default
i case-insensitive false
m multi-line mode: ^ and $ match begin/end line in addition to begin/end text false
s let . match \n false
U non-greedy: swap meaning of x* and x*?, x+ and x+?, etc false

For example, the following regular expression would match any document with a field type whose value is rest-layer.

{type: {$regex: "re[s]{1}t-la.+r"}}

The same example with flags:

{type: {$regex: "(?i)re[s]{1}t-LAYER"}}

However, keep in mind that Storers have to support regular expressions, and depending on the implementation of the storage handler the accepted syntax may vary. An ErrNotImplemented error will be returned for storage back-ends which do not support the $regex operator.

The operator $not functions as an opposite operator to $regex. Unlike MongoDB, we do not allow $not as a general negation operator.

The $elemMatch operator matches documents that contain an array field with at least one element that matches all the specified query criteria.

			"telephones": schema.Field{
				Filterable: true,
				Validator: &schema.Array{
					Values: schema.Field{
						Validator:  &schema.Object{Schema: &Telephone},
					},
				},
			},

Matching documents that contain specific values within array objects can be done with $elemMatch:

{telephones: {$elemMatch: {name: "John Snow", active: true}}}

The query above will return all documents whose telephones array field contains objects that have name AND active fields matching the queried values.

Note that documents returned may contain other objects in telephones that don't match the query above, but at least one object will do. Further filtering could be needed on the API client side.

$elemMatch Limitation

$elemMatch will work only for arrays of objects for now. Later it could be extended to work on plain arrays, e.g.:

{numbers: {$elemMatch: {$gt: 20}}}

Filter operators

Operator Usage Description
$or {$or: [{a: "b"}, {a: "c"}]} Join two clauses with a logical OR conjunction.
$and {$and: [{a: "b"}, {b: "c"}]} Join two clauses with a logical AND conjunction.
$in {a: {$in: ["b", "c"]}} Match a field against several values.
$nin {a: {$nin: ["b", "c"]}} Opposite of $in.
$lt {a: {$lt: 10}} The field's value is lower than the specified number.
$lte {a: {$lte: 10}} The field's value is lower than or equal to the specified number.
$gt {a: {$gt: 10}} The field's value is greater than the specified number.
$gte {a: {$gte: 10}} The field's value is greater than or equal to the specified number.
$exists {a: {$exists: true}} Match if the field is present (or not, if set to false) in the item, even if nil.
$regex {a: {$regex: "fo[o]{1}"}} Match regular expression on a field's value.
$not {a: {$not: "fo[o]{1}"}} Opposite of $regex.
$elemMatch {a: {$elemMatch: {b: "foo"}}} Match array items against multiple query criteria.

Some storage handlers may not support all operators. Refer to the storage handler's documentation for more info.

Sorting

Sorting of resource items is defined through the sort query-string parameter. The sort value is a list of the resource's fields separated by commas (,). To invert a field's sort, you can prefix its name with a minus (-) character. The sort parameter can be used with GET and DELETE methods on resource URLs.

To use a resource field with the sort parameter, the field must be defined on the resource and the Sortable field property must be set to true. You may want to ensure the backend database has this field indexed when enabled.

Here we sort the result by ascending quantity and descending create time:

/posts?sort=quantity,-created

Field Selection

REST APIs tend to grow over time. Resources get more and more fields to fulfill the needs of new features. But each time fields are added, all existing API clients automatically pay the additional cost. This tends to lead to a huge waste of bandwidth and added latency due to the transfer of unnecessary data. As a workaround, the fields parameter can be used to minimize and customize the response body of requests with a GET, POST, PUT or PATCH method on resource URLs.

REST Layer provides a powerful field selection (also named projection) system. If you provide the fields parameter with a comma-separated list of the fields you are interested in, only those fields will be returned in the document:

$ http -b :8080/api/users/ar6eimekj5lfktka9mt0 fields=='id,name'
{
    "id": "ar6eimekj5lfktka9mt0",
    "name": "John Doe"
}

If your document has sub-fields, you can use brackets to select sub-fields:

$ http -b :8080/api/users/ar6eimekj5lfktka9mt0/posts fields=='meta{title,body}'
[
    {
        "_etag": "ar6eimukj5lfl07r0uv0",
        "meta": {
            "title": "test",
            "body": "example"
        }
    }
]

All-fields expansion is also supported:

$ http -b :8080/api/users/ar6eimekj5lfktka9mt0/posts fields=='*,user{*}'
[
    {
        "_etag": "ar6eimukj5lfl07r0uv0",
        "id": "ar6eimukj5lfl07r0ugz",
        "created": "2015-07-27T21:46:55.355857401+02:00",
        "updated": "2015-07-27T21:46:55.355857989+02:00",
        "user": {
          "id": "ar6eimukj5lfl07gzb0b",
          "created": "2015-07-24T21:46:55.355857401+02:00",
          "updated": "2015-07-24T21:46:55.355857989+02:00",
          "name": "John Snow",
        },
        "meta": {
            "title": "test",
            "body": "example"
        }
    }
]

Field Aliasing

It's also possible to rename fields in the response using aliasing. To create an alias, prefix the field name with the wanted alias, separated by a colon (:):

$ http -b :8080/api/users/ar6eimekj5lfktka9mt0 fields=='id,name,n:name'
{
    "id": "ar6eimekj5lfktka9mt0",
    "n": "John Doe",
    "name": "John Doe"
}

As you see, you can specify the same field several times. It doesn't seem useful in this example, but with field parameters, it becomes very powerful (see below).

Aliasing works with sub-fields as well:

$ http -b :8080/api/users/ar6eimekj5lfktka9mt0/posts fields=='meta{title,b:body}'
[
    {
        "_etag": "ar6eimukj5lfl07r0uv0",
        "meta": {
            "title": "test",
            "b": "example"
        }
    }
]

Field Parameters

Field parameters are used to apply a transformation on the value of a field using custom logic.

For instance, if you are using an on-demand dynamic image resizer, you may want to expose the capability of this service without requiring the client to learn another URL-based API. Wouldn't it be better if we could just ask the API to return the thumbnail_url dynamically transformed with the desired dimensions?

By combining field aliasing and field parameters, we can expose this resizer API as follows:

$ http -b :8080/api/videos fields=='id,
                                    thumb_small_url:thumbnail_url(width:80,height:60),
                                    thumb_large_url:thumbnail_url(width:800,height:600)'
[
    {
        "_etag": "ar6eimukj5lfl07r0uv0",
        "thumb_small_url": "http://cdn.com/path/to/image-80w60h.jpg",
        "thumb_large_url": "http://cdn.com/path/to/image-800w600h.jpg"
    }
]

The example above shows the same field represented twice but with some useful value transformations.

To add parameters on a field, use the Params property of the schema.Field type as follows:

schema.Schema{
	Fields: schema.Fields{
		"field": {
			Params: schema.Params{
				"width": {
					Description: "Change the width of the thumbnail to the value in pixels",
					Validator: schema.Integer{}
				},
				"height": {
					Description: "Change the width of the thumbnail to the value in pixels",
					Validator: schema.Integer{},
				},
			},
			Handler: func(ctx context.Context, value interface{}, params map[string]interface{}) (interface{}, error) {
				// your transformation logic here
				return value, nil
			},
		},
	},
}

Only parameters listed in Params will be accepted. Your Handler function is called with the current value of the field and the parameters sent by the user, if any. Your function can apply the wanted transformations on the value and return it. If an error is returned, a 422 error will be triggered with your error message associated with the field.

Embedding

With the sub-field notation you can also request referenced resources or connections (sub-resources). REST Layer will recognize them automatically and fetch the associated resources in order to embed their data in the response. This can save a lot of unnecessary sequential round-trips:

$ http -b :8080/api/users/ar6eimekj5lfktka9mt0/posts \
  fields=='meta{title},user{id,name},comments(sort:"-created",limit:10){user{id,name},body}'
[
    {
        "_etag": "ar6eimukj5lfl07r0uv0",
        "meta": {
            "title": "test"
        },
        "user": {
            "id": "ar6eimul07lfae7r4b5l",
            "name": "John Doe"
        },
        "comments": [
            {
                "user": {
                    "id": "ar6emul0kj5lfae7reimu",
                    "name": "Paul Wolf"
                },
                "body": "That's awesome!"
            },
            ...
        ]
    },
    ...
]

In the above example, the user field is a reference to the users resource. REST Layer fetched the user referenced by the post and embedded the requested sub-fields (id and name). Same for comments: comments is set as a sub-resource of the posts resource. With this syntax, it's easy to get the last 10 comments on the post in the same REST request. For each of those comments, we asked to embed the referenced user with the id and name fields again.

Notice the sort and limit parameters passed to the comments field. Those are field parameters automatically exposed by connections to let you control the embedded list's order, filtering and pagination. You can use the sort, filter, skip, page and limit parameters on those fields, with the same syntax as their top-level query-string counterparts.

Such request can quickly generate a lot of queries on the storage handler. To ensure a fast response time, REST layer tries to coalesce those storage requests and to execute them concurrently whenever possible.

Pagination

Pagination is supported on collection URLs using the page and limit query-string parameters and can be used for resource list view URLs with request method GET and DELETE. If you don't define a default pagination limit using PaginationDefaultLimit resource configuration parameter, the resource won't be paginated for list GET requests until you provide the limit query-string parameter. The PaginationDefaultLimit does not apply to list DELETE requests, but the limit and page parameters may still be used to delete a subset of items.

If your collections are large enough, failing to define a reasonable PaginationDefaultLimit parameter may quickly render your API unusable.

Skipping

Skipping of resource items is defined through the skip query-string parameter. The skip value is a positive integer defining the number of items to skip when querying for items, and can be applied for requests with method GET or DELETE.

Skip the first 10 items of the result:

/posts?skip=10

Return the first 2 items after skipping the first 10 of the result:

/posts?skip=10&limit=2

The skip parameter can be used in conjunction with the page parameter. You may want both when, for instance, you show the first N elements of a list and then allow paginating the remaining items:

Show the first 2 elements:

/posts?limit=2

Paginate the rest of the list:

/posts?skip=2&page=1&limit=10

Authentication and Authorization

REST Layer doesn't provide any kind of support for authentication. Identifying the user is out of the scope of a REST API; it should be performed by an OAuth server. The OAuth endpoints could be either hosted on the same code base as your API or live in a different app. The recommended way to integrate OAuth or any other kind of authentication with REST Layer is through a signed token like JWT.

In this scheme, the authentication service identifies the user and stores data relevant to the user's identification in a JWT token. This token is sent to the API client as a bearer token, through the access-token query-string parameter or the Authorization HTTP header. An HTTP middleware then decodes and verifies this token, extracts the user's info from it and stores it in the context. In REST Layer, the user info is now accessible from your resource hooks, so you can change the query lookup or ensure mutated objects are owned by the user in order to handle the authorization part.
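
A minimal sketch of such a middleware, assuming a hypothetical verifyToken function and a custom context key that your hooks read back with ctx.Value:

type ctxKey int

const userKey ctxKey = 0

func authMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		token := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
		user, err := verifyToken(token) // hypothetical JWT verification
		if err != nil {
			http.Error(w, "Unauthorized", http.StatusUnauthorized)
			return
		}
		// Make the user info available to REST Layer resource hooks.
		ctx := context.WithValue(r.Context(), userKey, user)
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}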

See the JWT auth example for more info.

Conditional Requests

Each stored resource provides information on the last time it was updated (Last-Modified), along with a hash value computed on the representation itself (ETag). These headers allow clients to perform conditional requests by using the If-Modified-Since header:

$ http :8080/users/ar6ej4mkj5lfl688d8lg If-Modified-Since:'Wed, 05 Dec 2012 09:53:07 GMT'
HTTP/1.1 304 Not Modified

or the If-None-Match header:

$ http :8080/users/ar6ej4mkj5lfl688d8lg If-None-Match:'"1234567890123456789012345678901234567890"'
HTTP/1.1 304 Not Modified

Data Integrity and Concurrency Control

API responses include an ETag header which also allows for proper concurrency control. An ETag is a hash value representing the current state of the resource on the server. Clients may choose to ensure they update (PATCH or PUT) or delete (DELETE) a resource in the state they know it by providing the last known ETag for that resource. This prevents overwriting items with obsolete data.

Consider the following workflow:

$ http PATCH :8080/users/ar6ej4mkj5lfl688d8lg If-Match:'"1234567890123456789012345678901234567890"' \
    name='John Doe'
HTTP/1.1 412 Precondition Failed

What went wrong? We provided an If-Match header with the last known ETag, but its value did not match the ETag of the item currently stored on the server, so we got a 412 Precondition Failed.

When this happens, it's up to the client to decide whether to inform the user of the error and/or re-fetch the latest version of the document to get the latest ETag before retrying the operation.

$ http PATCH :8080/users/ar6ej4mkj5lfl688d8lg If-Match:'"80b81f314712932a4d4ea75ab0b76a4eea613012"' \
    name='John Doe'
HTTP/1.1 200 OK
Etag: "7bb7a71b0f66197aa07c4c8fc9564616"
Last-Modified: Mon, 27 Jul 2015 19:36:19 GMT

This time the update operation was accepted and we got a new ETag for the updated resource.

Concurrency control header If-Match can be used with all mutation methods on item URLs: PATCH (update), PUT (replace) and DELETE (delete).

Data Validation

Data validation is provided out-of-the-box. Your configuration includes a schema definition for every resource managed by the API. Data sent to the API to be inserted/updated will be validated against the schema, and a resource will only be updated if validation passes. See Field Definition section to know more about how to configure your validators.

$ http  :8080/api/users name:=1 foo=bar
HTTP/1.1 422 status code 422
Content-Length: 110
Content-Type: application/json
Date: Thu, 30 Jul 2015 21:56:39 GMT
Vary: Origin

{
    "code": 422,
    "message": "Document contains error(s)",
    "issues": {
        "foo": [
            "invalid field"
        ],
        "name": [
            "not a string"
        ]
    }
}

In the example above, the document did not validate, so the request was rejected with a description of the errors for each field.

Nullable Values

To allow a null value in addition to the field type, you can use the schema.AnyOf validator:

"nullable_field": {
	Validator: schema.AnyOf{
		schema.String{},
		schema.Null{},
	},
}

Extensible Data Validation

It is very easy to add new validators. You just need to implement the schema.FieldValidator interface:

type FieldValidator interface {
	Validate(value interface{}) (interface{}, error)
}

The Validate method takes the value as argument and must either return the value back with some eventual transformation or an error if the validation failed.

Your validator may also implement the optional schema.Compiler interface:

type Compiler interface {
	Compile() error
}

When a field validator implements this interface, the Compile method is called at server initialization. It's a good place to pre-compute some data (e.g.: compile a regexp) and verify the validator's configuration. If the configuration contains issues, the Compile method must return an error, so the initialization of the resource will generate a fatal error.
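
For example, here is a sketch of a custom validator checking strings against a configurable regular expression, pre-compiled at initialization; the Pattern type and its error messages are illustrative:

type Pattern struct {
	Expr string
	re   *regexp.Regexp
}

// Compile is called once at server initialization: it verifies the
// configuration and pre-computes the compiled expression.
func (v *Pattern) Compile() error {
	re, err := regexp.Compile(v.Expr)
	if err != nil {
		return fmt.Errorf("invalid pattern: %v", err)
	}
	v.re = re
	return nil
}

// Validate ensures the value is a string matching the pattern.
func (v *Pattern) Validate(value interface{}) (interface{}, error) {
	s, ok := value.(string)
	if !ok {
		return nil, errors.New("not a string")
	}
	if !v.re.MatchString(s) {
		return nil, fmt.Errorf("does not match %s", v.Expr)
	}
	return s, nil
}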

A validator may implement some advanced serialization or transformation of the data to optimize its storage. In order to read this data back and put it in a format suitable for JSON representation, a validator can implement the schema.FieldSerializer interface:

type FieldSerializer interface {
	Serialize(value interface{}) (interface{}, error)
}

When a validator implements this interface, the method is called with the field's value just before JSON marshaling. You should return an error if the format stored in the db is invalid and can't be converted back into a suitable representation.

See schema.IP validator for an implementation example.

Timeout and Request Cancellation

REST Layer respects context deadline from end to end. Timeout and request cancellation are thus handled through context. Since Go 1.8, context is cancelled automatically if the user closes the connection.

When a request is stopped because the client closed the connection (context cancelled), the response HTTP status is set to 499 Client Closed Request (for logging purposes). When a timeout is set and the request has reached this timeout, the response HTTP status is set to 504 Gateway Timeout.
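
A sketch of enforcing a per-request timeout with a plain context middleware; the 2-second value is arbitrary:

func timeoutMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// REST Layer propagates this deadline down to the storage handlers.
		ctx, cancel := context.WithTimeout(r.Context(), 2*time.Second)
		defer cancel()
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}

// http.Handle("/api/", http.StripPrefix("/api/", timeoutMiddleware(api)))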

Logging

You can customize the REST Layer logger by changing the resource.Logger function to call any logging framework you want.

We recommend using zerolog. To configure REST Layer with zerolog, proceed as follows:

// Init an alice handler chain (use your preferred one)
c := alice.New()

// Install a logger
c = c.Append(hlog.NewHandler(log.With().Logger()))

// Log API accesses
c = c.Append(hlog.AccessHandler(func(r *http.Request, status, size int, duration time.Duration) {
	hlog.FromRequest(r).Info().
		Str("method", r.Method).
		Str("url", r.URL.String()).
		Int("status", status).
		Int("size", size).
		Dur("duration", duration).
		Msg("")
}))

// Add some fields to per-request logger context
c = c.Append(hlog.RequestHandler("req"))
c = c.Append(hlog.RemoteAddrHandler("ip"))
c = c.Append(hlog.UserAgentHandler("ua"))
c = c.Append(hlog.RefererHandler("ref"))
c = c.Append(hlog.RequestIDHandler("req_id", "Request-Id"))

// Install zerolog/rest-layer adapter
resource.LoggerLevel = resource.LogLevelDebug
resource.Logger = func(ctx context.Context, level resource.LogLevel, msg string, fields map[string]interface{}) {
	zerolog.Ctx(ctx).WithLevel(zerolog.Level(level)).Fields(fields).Msg(msg)
}

See zerolog documentation for more info.

CORS

REST Layer doesn't support CORS internally but relies on an external middleware to do so. You may use the CORS middleware to add CORS support to REST Layer if needed. Here is a basic example:

package main

import (
	"log"
	"net/http"

	"github.com/rs/cors"
	"github.com/rs/rest-layer/resource"
	"github.com/rs/rest-layer/rest"
)

func main() {
	index := resource.NewIndex()

	// configure your resources

	api, err := rest.NewHandler(index)
	if err != nil {
		log.Fatalf("Invalid API configuration: %s", err)
	}

	handler := cors.Default().Handler(api)
	log.Fatal(http.ListenAndServe(":8080", handler))
}

JSONP

In general you don't really want to add JSONP when you can use CORS instead:

There have been some criticisms raised about JSONP. Cross-origin resource sharing (CORS) is a more recent method of getting data from a server in a different domain, which addresses some of those criticisms. All modern browsers now support CORS making it a viable cross-browser alternative (source.) There are circumstances however when you do need JSONP, like when you have to support legacy software (IE6 anyone?)

As with CORS, REST Layer doesn't support JSONP directly but relies on an external middleware. Such a middleware is very easy to write. Here is an example:

package main

import (
	"log"
	"net/http"

	"github.com/rs/rest-layer/resource"
	"github.com/rs/rest-layer/rest"
)

func main() {
	index := resource.NewIndex()

	// configure your resources

	api, err := rest.NewHandler(index)
	if err != nil {
		log.Fatalf("Invalid API configuration: %s", err)
	}

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fn := r.URL.Query().Get("callback")
		if fn != "" {
			w.Header().Set("Content-Type", "application/javascript")
			w.Write([]byte(";fn("))
		}
		api.ServeHTTP(w, r)
		if fn != "" {
			w.Write([]byte(");"))
		}
	})
	log.Fatal(http.ListenAndServe(":8080", handler))
}

Data Storage Handler

REST Layer doesn't handle storage of resources directly. A mem.MemoryHandler is provided as an example but should be used for testing only.

A resource storage handler is easy to write, though. Some handlers for popular databases are available, but you may want to write your own to put an API in front of anything you want. You just need to implement the resource.Storer interface:

type Storer interface {
	Find(ctx context.Context, q *query.Query) (*ItemList, error)
	Insert(ctx context.Context, items []*Item) error
	Update(ctx context.Context, item *Item, original *Item) error
	Delete(ctx context.Context, item *Item) error
	Clear(ctx context.Context, q *query.Query) (int, error)
}

Mutation methods like Update and Delete must ensure they atomically mutate the same item as specified in the argument by checking its ETag (the stored ETag must match the ETag of the provided item). In case the handler can't guarantee that, the storage must be left untouched and a resource.ErrConflict must be returned.

If the operation is not immediate, the method must listen for cancellation on the passed ctx. If the operation is stopped due to context cancellation, the function must return the result of the ctx.Err() method. See this blog post for more information about how context works.
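
As an illustration, here is a sketch of an Update implementation against a hypothetical in-memory map, enforcing both the ETag check and context cancellation. The memHandler type, its items map and mutex are illustrative; the ID and ETag fields on resource.Item and the resource.ErrNotFound variable are assumptions alongside the documented resource.ErrConflict:

func (h *memHandler) Update(ctx context.Context, item *resource.Item, original *resource.Item) error {
	// Honor context cancellation before doing any work.
	if err := ctx.Err(); err != nil {
		return err
	}
	h.mu.Lock()
	defer h.mu.Unlock()
	current, found := h.items[original.ID]
	if !found {
		return resource.ErrNotFound
	}
	// The stored ETag must match the one of the item passed by REST Layer,
	// otherwise the item was modified concurrently.
	if current.ETag != original.ETag {
		return resource.ErrConflict
	}
	h.items[item.ID] = item
	return nil
}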

If the backend storage is able to efficiently fetch multiple documents by their ids, it can implement the optional resource.MultiGetter interface. REST Layer will automatically use it whenever possible.
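
For reference, the optional interface looks roughly like the following; see the resource package godoc for the authoritative definition:

// MultiGetter is an optional interface a Storer can implement to let
// REST Layer fetch several items by their ids in a single call.
type MultiGetter interface {
	MultiGet(ctx context.Context, ids []interface{}) ([]*Item, error)
}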

See resource.Storer documentation for more information on resource storage handler implementation details.

Custom Response Formatter / Sender

REST Layer lets you extend or replace the default response formatter and sender. To write a new response format, you need to implement the rest.ResponseFormatter interface:

// ResponseFormatter defines an interface responsible for formatting the different types of response objects
type ResponseFormatter interface {
	// FormatItem formats a single item in a format ready to be serialized by the ResponseSender
	FormatItem(ctx context.Context, headers http.Header, i *resource.Item, skipBody bool) (context.Context, interface{})
	// FormatList formats a list of items in a format ready to be serialized by the ResponseSender
	FormatList(ctx context.Context, headers http.Header, l *resource.ItemList, skipBody bool) (context.Context, interface{})
	// FormatError formats a REST-formatted error or a simple error in a format ready to be serialized by the ResponseSender
	FormatError(ctx context.Context, headers http.Header, err error, skipBody bool) (context.Context, interface{})
}

You can also customize the response sender responsible for the serialization of the formatted payload:

// ResponseSender defines an interface responsible for serializing and sending the response
// to the http.ResponseWriter.
type ResponseSender interface {
	// Send serializes the body, sets the given headers, and writes everything to the provided response writer
	Send(ctx context.Context, w http.ResponseWriter, status int, headers http.Header, body interface{})
}

Then set your response formatter and sender on the REST Layer HTTP handler like this:

api, _ := rest.NewHandler(index)
api.ResponseFormatter = &myResponseFormatter{}
api.ResponseSender = &myResponseSender{}

You may also extend the DefaultResponseFormatter and/or DefaultResponseSender if you just want to wrap or slightly modify the default behavior:

type myResponseFormatter struct {
	rest.DefaultResponseFormatter
}

// Add a wrapper around the list with pagination info
func (r myResponseFormatter) FormatList(ctx context.Context, headers http.Header, l *resource.ItemList, skipBody bool) (context.Context, interface{}) {
	ctx, data := r.DefaultResponseFormatter.FormatList(ctx, headers, l, skipBody)
	return ctx, map[string]interface{}{
		"meta": map[string]int{
			"offset": l.Offset,
			"total":  l.Total,
		},
		"list": data,
	}
}
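
Similarly, you may wrap the DefaultResponseSender if you only need to tweak how the payload is sent; the header added below is a hypothetical example:

type myResponseSender struct {
	rest.DefaultResponseSender
}

// Send sets an extra header, then delegates serialization to the default sender.
func (s myResponseSender) Send(ctx context.Context, w http.ResponseWriter, status int, headers http.Header, body interface{}) {
	headers.Set("X-Powered-By", "rest-layer") // hypothetical header
	s.DefaultResponseSender.Send(ctx, w, status, headers, body)
}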

GraphQL

In parallel with the REST API handler, REST Layer is also able to handle GraphQL queries (mutation will come later). GraphQL is a query language created by Facebook which provides a common interface to fetch and manipulate data. REST Layer's GraphQL handler is able to read a resource.Index and create a corresponding GraphQL schema.

GraphQL doesn't expose resources directly, but queries. REST Layer takes all the resources defined at the root of the resource.Index and creates two GraphQL queries for each one. One query is just the name of the endpoint, so /users would result in users, and the other is the name of the endpoint suffixed with List, such as usersList. The item query takes an id parameter and the list query takes skip, page, limit, filter and sort parameters. All sub-resources are accessible using GraphQL sub-selection syntax.

If your resource defines aliases, additional GraphQL queries are exposed, named after the resource suffixed with the capitalized alias name. So for users with an alias admin, the query would be usersAdmin.

You can bind the GraphQL endpoint wherever you want, as follows:

index := resource.NewIndex()
// Bind some resources

h, err := graphql.NewHandler(index)
if err != nil {
	log.Fatal(err)
}
http.Handle("/graphql", h)
http.ListenAndServe(":8080", nil)
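
With the handler above listening on :8080, and assuming a users resource exposing hypothetical id and name fields plus an admin alias, queries could look like this:

http://localhost:8080/graphql?query={users(id:"<some-id>"){id,name}}
http://localhost:8080/graphql?query={usersList(limit:10){id,name}}
http://localhost:8080/graphql?query={usersAdmin{id,name}}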

GraphQL support is experimental. Only querying is supported for now; mutation will come later. Sub-queries are executed sequentially and may generate quite a lot of queries on the storage backend for complex requests. For now, you may prefer the REST endpoint with field selection, which benefits from many more optimizations.

Hystrix

REST Layer supports Hystrix as a circuit breaker. You can enable Hystrix on a per-resource basis by wrapping the storage handler using rest-layer-hystrix:

import "github.com/rs/rest-layer-hystrix"

index.Bind("posts", post, restrix.Wrap("posts", mongo.NewHandler()), resource.DefaultConf)

When wrapped this way, one Hystrix command is created per storage handler action, with the name formatted as <name>.<Action>. Possible actions are:

  • Find: when a collection of items is requested.
  • Insert: when items are created.
  • Update: when items are modified.
  • Delete: when a single item is deleted by its id.
  • Clear: when a collection of items matching a filter are deleted.
  • MultiGet: when several items are retrieved by their ids (on storage handlers supporting the MultiGetter interface).

Once enabled, you must configure Hystrix for each command and start the Hystrix metrics stream handler.
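
As a sketch, assuming the github.com/afex/hystrix-go/hystrix package and arbitrarily chosen thresholds, the configuration could look like this:

import (
	"net/http"

	"github.com/afex/hystrix-go/hystrix"
)

func configureHystrix() {
	// Tune the command created for the "posts" resource Find action.
	hystrix.ConfigureCommand("posts.Find", hystrix.CommandConfig{
		Timeout:               1000, // milliseconds, arbitrary value
		MaxConcurrentRequests: 100,
		ErrorPercentThreshold: 25,
	})

	// Expose the Hystrix metrics stream for dashboards.
	streamHandler := hystrix.NewStreamHandler()
	streamHandler.Start()
	go http.ListenAndServe(":8081", streamHandler)
}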

See the Hystrix godoc for more info and examples/hystrix for a complete usage example with REST Layer.

JSONSchema

It is possible to convert a schema to JSON Schema with some limitations for certain schema fields. Currently, we implement JSON Schema Draft 4 core and validation specifications. In addition, we have implemented "readOnly" from the less commonly used hyper-schema specification.

Example usage:

import "github.com/rs/rest-layer/schema/encoding/jsonschema"

b := new(bytes.Buffer)
enc := jsonschema.NewEncoder(b)
if err := enc.Encode(aSchema); err != nil {
  return err
}
fmt.Println(b.String()) // Valid JSON Document describing the schema.

Custom FieldValidators

For a custom FieldValidator to support encoding to JSON Schema, it must implement the jsonschema.Builder interface:

// The Builder interface should be implemented by custom schema.FieldValidator implementations to allow JSON Schema
// serialization.
type Builder interface {
	// BuildJSONSchema should return a map containing JSON Schema Draft 4 properties that can be set based on
	// FieldValidator data. Application specific properties can be added as well, but should not conflict with any
	// legal JSON Schema keys.
	BuildJSONSchema() (map[string]interface{}, error)
}

To more easily extend a FieldValidator from the schema package, you can call jsonschema.ValidatorBuilder inside BuildJSONSchema():

type Email struct {
	schema.String
}

func (e Email) BuildJSONSchema() (map[string]interface{}, error) {
	parentBuilder, _ := jsonschema.ValidatorBuilder(e.String)
	m, err := parentBuilder.BuildJSONSchema()
	if err != nil {
		return nil, err
	}
	m["format"] = "email"
	return m, nil
}
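
Assuming no other constraints are set on the embedded schema.String, the Email field would then presumably encode to:

{"type": "string", "format": "email"}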

Sub-schema Limitation

Sub-schemas only get converted to JSON Schema if you specify the sub-schema by setting a Field's Validator attribute to a schema.Object instance. Use of the Field's Schema attribute is not supported; instead, we hope #77 will be implemented.

schema.Dict Limitations

schema.Dict only supports nil and schema.String as KeysValidator values. Note that some less common combinations of schema.String attributes will lead to usage of an allOf construct with duplicated schemas for values. This is to avoid usage of regular expression expansions that only a subset of implementations actually support.

The limitation on KeysValidator values arises because JSON Schema draft 4 (and draft 5) support for key validation is limited to properties, patternProperties and additionalProperties. This essentially means that there can be no JSON Schema object supplied for key validation; instead we need to rely on exact matches (properties), regular expressions (patternProperties) or no key validation (additionalProperties).

schema.Reference Provisional Support

The support for schema.Reference is purely provisional and simply returns an empty object {}, meaning it does not give any hint as to which validation the server might use.

With a potential later implementation of the OpenAPI Specification (a.k.a. the Swagger 2.0 Specification), the goal is to refer to the ID field of the linked resource via an object {"$ref": "#/definitions/<unique schema title>/id"}. This is tracked via issue #36.

schema.URL Limitations

The current serialization of schema.URL always returns a schema {"type": "string", "format": "uri"}, ignoring any struct attributes that affect the actual validation within rest-layer. The JSON Schema is thus not completely accurate for this validator.

Note that JSON Schema draft 5 adds uriref, which could allow us to at least document whether AllowRelative is true or false. JSON Schema also allows application-specific additional formats to be defined, but it's not practical to create a custom format for every possible struct attribute combination.

Licenses

All source code is licensed under the MIT License.

rest-layer's People

Contributors

apuigsech, bookwyrm12, dragomir-ivanov, mishak87, muyadan, omani, quentinperez, robvadai, rs, schwarmco, sebest, smyrman, torie, tsetsoo, uhgh, ultimateboy, uroshercog, vvelikodny, whilei, yanfali

rest-layer's Issues

Feature Request: Provide Unique Field Definition

With a boolean field definition named "Unique" we could avoid the need for a hook just to make sure the provided field (name, position, whatever... or something which is supposed to be unique) is not already in the db.

This would result in much less code (no hook needed).

Are there any plans to implement this?

Feature: Make custom HTTP methods

Hi,

It would be super awesome to be able to create or have custom HTTP verbs (methods) so I can be more flexible with my own storage handler (Storer).

Something like

curl -XOPEN /api/some/route/to/media/file
curl -XBAN /api/some/route/to/bridge/ip
curl -XDELEGATE /api/some/route/to/proxy/zone

Can we implement this, or are there any plans to make this happen?

[JSON Schema] Support for schema.Reference

UPDATED

This ticket is now about extending schema.Reference support in the schema/encoding/jsonschema package when adding support for the OpenAPI Specification (a.k.a. the Swagger 2.0 specification).

More specifically, schema.Reference fields should be encoded as {"$ref": "#definitions/<unique resource path>/id"} when encoding a schema for an OpenAPI Specification resource.

This might require:

  • The ability to "name" a schema when adding it to the Index.
  • Changes to schema.Reference, or granting the encoder access to the Index.
  • The ability to turn on/off a JSON Schema encoding "extension".

PS! This ticket is currently blocked on adding initial support for the OpenAPI Specification.

Validate a field based on the value of another field?

Hello,

I was wondering how to validate a field based on the value of another field; ex:

{"doc": {"type": "x", "value": "xxxxx"}}
{"doc": {"type": "y", "value": "yyy-yyy"}}

if doc type is "x", allowed value format is "xxxxx"; if it's "y", allowed value format is "yyy-yyy".

Currently, validators seem to be for a single value only. What is the best way to do it?

Thanks.

q: resource error types

I'm writing a Storer implementation, and definitely appreciate the detailed comments and example implementations. Does it make sense to have an error covering general DB failures unrelated to the specific operation (such as a connection error)?

How do I use rest-layer with gorilla mux?

I want to use gorilla mux for my other routes so I can filter on methods and hostnames etc., just what gorilla mux offers in general. Now with the below setup my rest-layer API does not work: I get NOT FOUND back, but my other routes do work.

r := mux.NewRouter()
r.HandleFunc("/me", getMe(jwtSecretBytes))
r.HandleFunc("/websocket", serveHome)
r.Handle("/files", gridfsapi)
r.Handle("/", c.Then(api))

log.Println(fmt.Sprintf("Serving API on http://localhost:%d", port))

if err := http.ListenAndServe(fmt.Sprintf(":%d", port), r); err != nil {
	log.Fatal(err)
}

Shouldn't api, which is just a rest.NewHandler(index), work with gorilla mux?

Bypass Store from FindEventHandler

Hello,

It looks like FindEventHandler does not support any return except error. Is there any way to implement caching to bypass going to the store?

It seems like the only way to do this at present is to implement your own storage extension.

Are there any plans to support 'tiered' storage extensions? I think the ES storage extension is very good, but in most cases ES is not used as a canonical store. A tiered storage extension would allow for a canonical store that can utilize ES (or redis) as a secondary index which can be guaranteed 'up to date' with the final storage backend through use of hooks.

[jsonschema] Bug: error not returned from Encoder

While doing some test-cleanup in the jsonschema package, I found a bug where errors in encoding a validator are never returned from jsonschema.Encoder.

I found this bug while changing this test:

-func TestErrNotImplemented(t *testing.T) {
-       validator := &schema.IP{}
-       b := new(bytes.Buffer)
-       assert.Equal(t, ErrNotImplemented, validatorToJSONSchema(b, validator))
-}

Into one that tests the public interface:

+type dummyValidator struct{}
+
+func (v dummyValidator) Validate(value interface{}) (interface{}, error) {
+       return value, nil
+}
+
+func TestErrNotImplemented(t *testing.T) {
+       s := schema.Schema{
+               Fields: schema.Fields{
+                       "i": {
+                               Validator: &dummyValidator{},
+                       },
+               },
+       }
+       enc := jsonschema.NewEncoder(new(bytes.Buffer))
+       assert.Equal(t, jsonschema.ErrNotImplemented, enc.Encode(&s))
+}

PS! The result is the same if &schema.IP{} is used as Validator in the new test.

I am trying to see if I can fix this together with my change.

[jsonschema] Rethink how jsonschema deals with encoding internally

This issue is strongly linked to #35.

There are a couple of issues with the current implementation of the jsonschema package:

  1. Encoding for different Validator types is all done by one big switch-case in one function. This has an extensibility and readability problem.
  2. The encoder function is "hard-coded" to add commas in the correct places, which has proven error prone, e.g. when using nil validators, as shown in #50. Sidenote: for the specific example, it still needs to be discussed whether nil validators for a field are and/or should be accepted by the rest-layer API or not, but as I pointed out in #50, there are real use-cases where it would be useful to accept them.
  3. The interface for end-users to implement for their own custom Validator implementations proposed in #35, currently proposing to "mirror" the current internal behavior, requires the end-user to serialize a partial JSON object (i.e. excluding the {, }) and correctly deal with commas and white-space.

I propose having a look at how to better implement a more easy-to-use interface for #35, and then use the same interface internally. I also propose to split the validator encoder function, so that there is one encoder type and/or function per supported validator type. Each of these encoders will get a separate file, mirroring the layout of the schema package.

I am happy coming up with a proposal/PR as I get time to think deeper about it and try out some code.

[jsonschema] Support for additional types

This ticket aims to add support for translating unhandled validators from the schema package to a JSON Schema type (with an appropriate "format"). It is assumed that the JSON fields below are added to the same JSON object that is created for the parent schema.Field type.

Relevant background on "format"

"format" is covered by the validation spec section 7. Semantic validation with "format". This spec specifies some formats, as well as allowing implementations to define custom formats when needed. Where possible, I will fall back to suggest a format from The Swagger 2.0 spec if the JSON spec fails to define any format in particular, and the format would provide useful information for end-users or tools.

String types

For schema.IP, JSON Schema defines only "IPv4" and "IPv6". The most correct here is to leave the type in the outer schema, and then use a "oneOf" clause to encapsulate the "format" part of the schema as a "sub-schema":

"type": "string",
"oneOf": [
    {"format": "IPv4"},
    {"format": "IPv6"},
]

For schema.URL we could go for just:

"type": "string",
"format": "uri"

This ignores all the validation options that might be set on a schema.URL instance, which is not ideal, but perhaps good enough. While this is a pragmatic solution, the correctness of this implementation might be argued, as the spec states everything accepted as a valid URI by https://tools.ietf.org/html/rfc3986.

For schema.Password we could go for:

"type": "string",
"format": "password"

Just like for schema.String, "minLength" and "maxLength" should be encoded from minLen and maxLen when set.

Note that the "password" format is not defined by JSON Schema. It is however defined by Swagger 2.0, and it seams reasonable to align with Swagger on extensions when possible.

The purpose of the format according to Swagger 2.0: "Used to hint UIs the input needs to be obscured."

Other validators

schema.Null should simply be represented as:

"type": "null"

This type is documented in the core spec section 3.5. JSON Schema primitive types

Can we make ETAG presence in PATCH/PUT/DELETE requests mandatory?

from the docs:

Concurrency control header If-Match can be used with all mutation methods on item URLs: PATCH (update), PUT (replace) and DELETE (delete).

I would argue to make the If-Match header mandatory for all the above three HTTP methods, so it is ensured that concurrency control is taking place. By forcing the client to use conditional requests (with an If-Match header) you embrace the fact that the backend does indeed do concurrency control. Allowing a client to opt out of this could result in unexpected behaviour when modifying a resource.

schema.Compile() clarification

@rs I have a question about who invokes Compile if you use rs/rest-layer/schema outside of rs/rest-layer. Would it be accurate to say that when used in a standalone manner, all users of schema must invoke Compile() on schema.Schema instances to guarantee they are correctly initialized?

If this is the case I will probably send you a PR with a documentation update :)

HATEOAS Support

Olivier, I just came across this project while looking for schema related packages. There is a lot of overlap between what you are doing here and some stuff I've been tinkering with. However, one big difference is that I am very much approaching my stuff from a HATEOAS perspective where the API is discoverable and the representations use hypermedia formats like HAL, Siren, etc.

I was just curious if you've thought about such approaches in the context of rest-layer?

In any case, I really like a lot of the ideas you've introduced.

P.S. - This isn't exactly an "issue", but your profile doesn't include an email address so I added here. Sorry for the clutter. Feel free to contact me by email if you want to take this offline.

Question: Approach to pass custom parameters to my Google Datastore handler

I've largely completed a Google Cloud Datastore handler, see https://github.com/ajcrowe/rest-layer-datastore

However I would like to be able to control whether a field is indexed within Datastore for each Field. What approach would you recommend for this?

Perhaps extending Field to have a StorageOptions param which can be schema.StorageOptions which is a map[string]interface{}?

Like this?

type StorageOptions map[string]interface{}

user = schema.Schema{
		Description: `Represents a user`,
		Fields: schema.Fields{
			"id":      schema.IDField,
			"created": schema.CreatedField,
			"updated": schema.UpdatedField,
			"name": {
				Required:   true,
				Filterable: true,
				Sortable:   true,
				Validator: &schema.String{
					MaxLen: 150,
				},
				StorageOptions: &schema.StorageOptions{
					"index": false,
				},
			},
...

Wanted to check this approach.

Question: Custom routes

Hi,

if I want to enjoy all the benefits of rest-layer (data validation, json endec, hooks, etc.) but don't want to actually work with the underlying storage backend. how would I do it?

I came up with the idea of using the mem storage so I can bind a route. It works but it is ugly.

Are there any plans to make custom routing available? In the end it is just an index.Bind() without a storage handler. Or any other hints?

JWT-auth example always fails

Hello,

I'm trying to play with the jwt-auth example, but it always gets errors. Could you please help me with this point? Thanks a lot!

$ http :8080/posts access_token==eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyX2lkIjoiamFjayJ9.i74uQMaftKhwZri0XaTMTnqiBY0cmuMu27Yuv8WUy68 title="Jack's post"
HTTP/1.1 422 Unprocessable Entity
Content-Length: 82
Content-Type: application/json
Date: Sat, 12 Nov 2016 11:30:07 GMT

{
"code": 422,
"issues": {
"user": [
"required"
]
},
"message": "Document contains error(s)"
}

Another test:
http :8080/posts access_token==eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyX2lkIjoiamFjayJ9.i74uQMaftKhwZri0XaTMTnqiBY0cmuMu27Yuv8WUy68
HTTP/1.1 401 Unauthorized
Content-Length: 37
Content-Type: application/json
Date: Sat, 12 Nov 2016 11:35:21 GMT

{
"code": 401,
"message": "Unauthorized"
}

Here is the example console:

go run jwt-auth.go
2016/11/12 18:21:55 Serving API on http://localhost:8080
2016/11/12 18:21:55 jackTokenString==%!(EXTRA string=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyX2lkIjoiamFjayJ9.i74uQMaftKhwZri0XaTMTnqiBY0cmuMu27Yuv8WUy68)
2016/11/12 18:21:55 johnTokenString==%!(EXTRA string=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyX2lkIjoiam9obiJ9.yrpLD2gUq_TIyxMKpDBHQO391KGCNrFl-RvMT-p90MU)
2016/11/12 18:21:55 Your token secret is "secret", change it with the `-jwt-secret' flag
2016/11/12 18:21:55 Play with tokens:

  • http :8080/posts access_token==eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyX2lkIjoiam9obiJ9.yrpLD2gUq_TIyxMKpDBHQO391KGCNrFl-RvMT-p90MU title="John's post"

  • http :8080/posts access_token==eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyX2lkIjoiam9obiJ9.yrpLD2gUq_TIyxMKpDBHQO391KGCNrFl-RvMT-p90MU

  • http :8080/posts

  • http :8080/posts access_token==eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyX2lkIjoiamFjayJ9.i74uQMaftKhwZri0XaTMTnqiBY0cmuMu27Yuv8WUy68 title="Jack's post"

  • http :8080/posts access_token==eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyX2lkIjoiamFjayJ9.i74uQMaftKhwZri0XaTMTnqiBY0cmuMu27Yuv8WUy68
    value:
    value:
    value:
    value:
    value:
    value:

Support for GridFS

hi @rs,

Are there any plans to implement GridFS and use it like it can be used in python-eve (schema validator "media" for example)?

jsonschema testing: custom validators where never run!

While working on #88, I discovered that custom validators were never run by the test runner.

UPDATE: unit tests for URL and Object have previously silently failed. The failures were due to test issues rather than implementation issues.

At least one of these fields is required

I'm trying to figure out a way to express a schema where I have a set of fields and at least one of them must be present to validate the JSON. AnyOf is for a specific Field. I'm thinking I need to either:

  • modify schema to support - at least one of
  • look at the map we return and check its length - if it's zero then the invariant is false

Am I missing something? Thanks

Feature request: have total items in response

Hi,

As previously discussed in a PR, I would like to ask how and when we will implement the feature of having the total items displayed in the response. @rs said it should be explicitly requested, so my question is: are there any plans for how to do it?

Content-type

Congratulations for this beautiful initiative. Plenty of potential, very opinionated but I personally really like the "schema" and "fields" choices, it seems to fit a very large scope and be easily extended to the developer's needs.

However, after just playing a few minutes in order to evaluate if this would really ease recoding of my website from PHP to Go, I just tried the "users/posts" example and found that it does not seem to work with 'content-type:application/x-www-form-urlencoded'... Is this because of some settings I am not aware of yet, or did you choose to force the REST API to only be accessible through 'application/json' requests?

Suggestion: you created useful "schema.Time", "schema.String" for instance, why not "schema.Email" and "schema.PhoneNumber"?

By the way, this is not a real issue but I didn't know how to contact you otherwise. Does the rest-layer "framework" play well with the Google App Engine datastore and go-kit?

Is there a way to customize error messages sent for invalid data (in order to handle a multi-language error dictionary, for instance, that generates the error message depending on language/country and error code/source of error)?

[jsonschema] Support custom Validators

One of the strengths of the rest-later/schema package, is that users can define their own Validators. However, doing so would lead to their validators not being handled by the schema/encoding/jsonschema package at the moment.

Solution 1: Marshaler interface

One solution would be to add an interface in the schema/encoding/jsonschema package similar to what one would find in the encoding/json package from the standard libs.

type Marshaler interface {
        MarshalJSONSchema() ([]byte, error)
}

One could also choose to be a bit different, and go for an interface that allows directly passing in the writer to use:

type Marshaler interface {
        MarshalJSONSchema(w io.Writer) error
}

Either way, the jsonschema package should test if this interface is implemented for any given validator, and prefer to use it if it is.

The second option might be simplest in terms of allowing users to extend existing validators. E.g. someone might write something like:

type SizedDict struct{
        schema.Dict
        MaxValues, MinValues int
}

func (sd SizedDict) Validate(value interface{}) (interface{}, error) {
        value, err := sd.d.Validate(value)
         if err != nil {
                  return nil, err
         }
         ...
}

func (sd SizedDict) MarshalJSONSchema(w io.Writer) error {
        enc = jsonschema.NewEncoder(w)
        if err := enc.Encode(sd.d); err != nil {
                return err
        }
        ...
}

If the first option is chosen for the interface, it might be beneficial to define an equivalent to json.Marshal in the jsonschema package to simplify user extensions.

Validate schema.Reference when embedded in nested FieldValidators

Hi,

I have the following schema:

var Roles = schema.Schema{
	Fields: schema.Fields{
		"id":      schema.IDField,
		"created": schema.CreatedField,
		"updated": schema.UpdatedField,

		// fields
		"name": {
			Required:   true,
			Filterable: true,
			Sortable:   true,
			Validator: &schema.String{
				MinLen: 1,
				MaxLen: 63,
			},
		},
		"description": {
			Filterable: true,
			Sortable:   true,
			Validator: &schema.String{
				MaxLen: 63,
			},
		},
		"usernames": {
			Validator: &schema.Reference{
				Path: "users",
			},
		},
	},
}

I would like to have usernames as an array of references, but I want to be able to pass the user's name instead of its IDField. Something like:

pseudo POST request:
...
"usernames": ["omani", "rs"]
...

How do I do this? I guess there is no support for this in the Validator and I have to do this in a hook?

`OnGet()` event hook not firing

Steps to reproduce:

  1. Create a simple resource endpoint
  2. Attach OnGet and OnFind event hooks with a simple log message (see code below)
  3. Post a new simple resource (http post :8080/simple name="testing")
  4. Do a GET query on the new item (http get :8080/simple/id-from-above)

Expected Results:

See log entry for both OnGet() and OnFind() hooks.

Actual Results:

Only OnFind() is fired, not OnGet()

Sample Code:

package main

import (
	"context"
	"log"
	"net/http"

	"github.com/justinas/alice"

	mem "github.com/rs/rest-layer-mem"
	"github.com/rs/rest-layer/resource"
	"github.com/rs/rest-layer/rest"
	"github.com/rs/rest-layer/schema"
)

var (
	SimpleSchema = schema.Schema{
		Description: `A simple object`,
		Fields: schema.Fields{
			"id": schema.IDField,
			"name": {
				Required:   true,
				Filterable: true,
				Validator: &schema.String{
					MaxLen: 150,
				},
			},
		},
	}
)

type SimpleHook struct{}

func (h SimpleHook) OnFind(ctx context.Context, lookup *resource.Lookup, offset, limit int) error {
	log.Println("SimpleHook.OnFind()")
	return nil
}

func (h SimpleHook) OnGet(ctx context.Context, id interface{}) error {
	log.Println("SimpleHook.OnGet()")
	return nil
}

func main() {
	resource.LoggerLevel = resource.LogLevelDebug

	index := resource.NewIndex()

	simpleHandler := mem.NewHandler()
	simpleResource := index.Bind("simple", SimpleSchema, simpleHandler, resource.Conf{
		AllowedModes: resource.ReadWrite,
	})
	err := simpleResource.Use(SimpleHook{})
	if err != nil {
		log.Fatalf("Error protecting simple resource: %s", err)
	}

	api, err := rest.NewHandler(index)
	if err != nil {
		log.Fatalf("Invalid API configuration: %s", err)
	}

	c := alice.New()
	http.Handle("/", c.Then(api))

	log.Print("Serving API on http://localhost:8080")
	if err := http.ListenAndServe(":8080", nil); err != nil {
		log.Fatal(err)
	}
}

Redesign schema package to allow any FieldValidator at the top level

UPDATED on 2018-09-05. Originally this was a question about the difference between the schema.Object FieldValidator and the Schema parameter on Field, and it evolved from there.

Background

Today there are two ways to specify a schema:

s := schema.Schema{
        Fields: schema.Fields{
                "meta": {Schema: subSchema}
        }
}

and

s := schema.Schema{
        Fields: schema.Fields{
                "meta": {Validator: schema.Object{Schema: subSchema}}
        }
}

These are equivalent in principle, but in reality, the code supports them differently.

E.g. for validating and correctly setting read-only values, only the first method will work.

The second way of expressing this, the only one supported by the JSON Schema encoding package, is the only syntax that allowed Schema validation within an Array, AnyOf, AllOf or other nested structure in the past.

Proposal

This ticket suggests a redesign of the schema package so that:

  • Allow any FieldValidator to be used in a top-level schema (renamed to just schema.Validator in example).
  • Merge the schema.Schema and schema.Field types into one
  • Fields is moved from schema to Object (and is now a map of Schemas)
  • Move the Required Field attribute (does not make sense on all fields) to be a list-attribute on Object.

Example struct definitions:

type Object struct{
    Fields   map[string]Schema
    Required []string
}

type Schema struct{
    Title       string
    Description string
    ReadOnly    bool
    Type        Validator
}

type Array struct{
    KeysValidator Validator
    Values        Schema
}

Tag a release / add Glide dependency manifest for related projects.

An example of a change that could introduce a version mismatch between rest-layer and supporting libraries for end-user, is #62.

Minimum solution

To allow end-users to better cope with minor breaking changes before a rest-layer v1.0, and to improve the ability end-users have for reproducibility, I propose to as a minimum tag a release v0.1.0 following semver conventions.

How to handle breaking changes

According to the semver spec: item 4, versions 0.y.z are for initial development, and anything may break at any time. However, I suggest that for a breaking change pre v1.0.0, MINOR (y) needs to be incremented. I.e. there should be no breaking changes in a PATCH (z) release.

For additional stability only

To further allow people to minimize the risk of unpredictable behaviour due to version mismatch, I suggest that supporting repositories such as rest-layer-mongo etc. get a manifest file describing which rest-layer version they require. I suggest this version number to be locked to MAJOR and MINOR pre v1.0.0. E.g. with Glide syntax:

import:
- package: github.com/rs/rest-layer
  version: ~0.1

Until an official Go manifest format is defined, I suggest relying on a Glide manifest.

EDIT: Corrected some mistakes, including referring to a lock of MAJOR and PATCH.
EDIT 2: Added headings.

Describing Arrays of Objects?

Hello, I'm struggling at the moment trying to describe Arrays of 'object'. It's not obvious looking at the primitives how to do that. I have experimented with setting a schema.Fields with a Validator of type schema.Array and then using the Field.Schema to describe the object, but I'm not sure this would validate correctly. Ideally, what would be interesting is if I could set a schema.Schema on an Array, so one could see the hierarchy.

Would there be interest in taking a patch for Array that would let me nest a Schema within an Array? Array itself doesn't let you map key validation to value validation in an obvious way. Perhaps this is a documentation issue?

Thanks for any guidance.

[jsonschema] Table driven tests

Hi.

I am interested in contributing some table-driven tests for the jsonschema package. I am picturing an array of schema definitions that can be compared to a JSON string via asserts.JSONEq. I would expect my progress to be a bit slow, as it's a busy period for me family-wise.

@rs, is depending on Go 1.7 sub-tests (inside the test package only) OK, or should I write my tests so they work for Go 1.6?

@yanfali, I thought I would mention your name as well so you get notified ;-)

Provide: mongo's $regex query parameter

It would be awesome to be able to query a value of a field by regex.

Since the format of the parameter is inspired by the MongoDB query format, we could also have its $regex in queries.

JSONSchema required

When a schema is embedded within an Array or an Object Field, the required field is added to the "items" or "properties" body. It should appear one level up, at the same level as "type".
Reported by @smyrman

Example of incorrect output

        "items": {
          "properties": {
            "key": {
              "type": "string"
            },
            "score": {
              "type": "integer"
            },
            "required": [
              "score",
              "key",
              "value"
            ],
            "value": {
              "type": "string"
            }
          },
          "type": "object"
        },

Example of correct output

        "items": {
          "properties": {
            "key": {
              "type": "string"
            },
            "score": {
              "type": "integer"
            },

            "value": {
              "type": "string"
            }
          },
          "required": [
            "score",
            "key",
            "value"
          ],
          "type": "object"
        },

Question: How do you handle pages for different routes

Here's the scenario:
Consider that I want to have a path called "/Pages" and attach it to multiple other routes. How do I create a reference so I can use a single Page type for the different routes below?


/foo/pages/...
/bar/pages/...
/foobar/pages/...

I do want to use the same type called "Page" instead of defining it again and again for different routes.

rest-layer-mem

Good evening,

just a rapid verification: I just came back from vacation and wanted to go back to the api I am writing, trying to play with rest-layer to evaluate it.

Using rest-layer-mem (I have not plugged in a real database yet), I can post 10 times to the same resource with the exact same data, and 10 identical items will be created... Do you completely defer the verification of the existence of a matching item to the "Storer"? Therefore, for each implementation (for example if I choose to store the data in MongoDB or App Engine) I should create all the methods to satisfy the Storer interface, including the verification of a matching item after a POST request before inserting it in the db. Am I right?

Also, it is not obvious how to access a resource item. /users/blablabla, for example, will get the user with which field of the schema matching blablabla? Must it be named "id"? Can we choose another name (for example userid, pseudo, alias or anything else that we can configure as the identifier)? Because the automatically generated id (returned after a successful POST) might not be very aesthetic.

I am also willing to store some data, with basic mandatory information (example: user's id, password, email and name), a boolean (detailed) specifying if the optional fields have been provided, and the values for the optional fields. For example, in a "details" subschema, I'd like the fields (gender, age, city) to be mandatory if detailed==true. This could occur in a 2-step sign-in for example, letting the user choose if he wants to fill in the optional information, but if he chooses to store the details then ALL of it should be provided.
If detailed==false those fields are optional, if detailed==true those fields are required. Is there a way to handle this without storing optional details and basic info in 2 independent resources? Or maybe just by putting this verification in the POST request handler itself?

New (optional) interface in the schema package to support Swagger 2.0 generation?

Looking at what's missing to successfully generate a Swagger 2.0 specification from a resource.Index. Most of it, like paths, tags etc., seems trivial enough to generate from index parameters or, as a stop-gap, hardcode in the application.

However for the definitions object, I feel there is a missing piece in the schema package to do a successful auto-translation to a swagger Schema. Perhaps if there was an extra (optional) interface for Validators that was implemented by all the standard types, that could help? Perhaps something like:

type Swagger interface {
    SwaggerParams() map[string]interface{}
}

For simple types, like bool, it probably only need to return "type", while for e.g. an integer with boundaries, it should return "maxLength", "minLength", and a "format" (if defined).

Demo does not work

hi,

I am trying to run the demo and get the following errors:

# command-line-arguments
examples/demo/main.go:157: cannot use xlog.NewHandler(xlog.Config literal) (type func(xhandler.HandlerC) xhandler.HandlerC) as type alice.Constructor in argument to c.Append
examples/demo/main.go:164: cannot use xaccess.NewHandler() (type func(xhandler.HandlerC) xhandler.HandlerC) as type alice.Constructor in argument to c.Append

any hints?

Documentation update Validator

So I recently discovered a little quirk about interfaces which wasn't obvious to me as a user of rest-layer/schema. When setting Field { Validator }, the schema FieldValidators must be pointers, otherwise the interface detection for Compile() does not work as expected. Interestingly, the detection for the FieldValidator interface does. I'm not good enough at understanding Go interfaces to understand why this is the case, but it would be good to document this someplace, as it can lead to FieldValidator Compile functions not being executed.

It could be related to this from fields.go, but I'm not strong enough at Go's type system to understand exactly why.

                if c, ok := f.Validator.(Compiler); ok {
                        if err := c.Compile(); err != nil {
                                return fmt.Errorf(": %v", err)
                        }
                }

Provide: multipart/form-data POSTs

hi,

I am writing a mongo layer for GridFS and I wonder if it is possible to have multipart/form-data requests for writing to a file (with a POST /resourcename/item).

Can we have an option to accept content types other than application/json?

Are there any plans to implement this?

Bug: `schema.Array{}` returning "gob type not registered for interface error" using memory storage

I'm having issues getting an array field to work as expected. Any help would be greatly appreciated. I've tried a number of different ValuesValidators and get similar results with each.

Steps to reproduce:

  1. Create a simple schema with a &schema.Array{} field
  2. Submit a POST with a list of values into the list field

Expected Outcome:

Entity saved correctly

Actual Outcome:

Error: "gob: type not registered for interface: []interface {}"

Sample Code:

package main

import (
	"log"
	"net/http"

	"github.com/justinas/alice"

	mem "github.com/rs/rest-layer-mem"
	"github.com/rs/rest-layer/resource"
	"github.com/rs/rest-layer/rest"
	"github.com/rs/rest-layer/schema"
)

var (
	fooSchema = schema.Schema{
		Description: `A foo object`,
		Fields: schema.Fields{
			"id": schema.IDField,
			"name": {
				Required:   true,
				Filterable: true,
				Validator: &schema.String{
					MaxLen: 150,
				},
			},
			"list": {
				Validator: &schema.Array{
					ValuesValidator: &schema.String{},
				},
			},
		},
	}
)

func main() {
	resource.LoggerLevel = resource.LogLevelDebug

	index := resource.NewIndex()

	fooHandler := mem.NewHandler()
	_ = index.Bind("foo", fooSchema, fooHandler, resource.Conf{
		AllowedModes: resource.ReadWrite,
	})

	api, err := rest.NewHandler(index)
	if err != nil {
		log.Fatalf("Invalid API configuration: %s", err)
	}

	c := alice.New()
	http.Handle("/", c.Then(api))

	log.Print("Serving API on http://localhost:8080")
	if err := http.ListenAndServe(":8080", nil); err != nil {
		log.Fatal(err)
	}
}

Sample of error:

$ http post :8080/foo name="test" list:='["a","b"]'
HTTP/1.1 520 status code 520
Content-Length: 79
Content-Type: application/json
Date: Tue, 03 Jan 2017 21:17:32 GMT

{
    "code": 520,
    "message": "gob: type not registered for interface: []interface {}"
}

questions

Hi, I was investigating RESTful frameworks written in Go for our new product and found this great one. Thanks for your effort!

It seems to me this framework is in good enough shape to use for a real product in production(?). But I still have several concerns:

  1. What's the roadmap for SQL, specifically MySQL, support? I can create my own handlers for now, but it seems I also need to translate the lookup struct to MySQL.
  2. This framework seems to be created originally for NoSQL databases, e.g. the concept of documents, field selection etc., and using ETags to do concurrency control in the db (MySQL transactions naturally do the trick). Do I need to worry about any performance issues if using a SQL database?

Question: Getting another document from a hook

Hello there.

I would like to ask your advice in the following task.

Before adding a new document to a MongoDB (via POST method) I need to calculate some of its fields' values. In order to do that I need to get another two documents from another collection (all collections are handled by the same app).

It seems to me that the InsertEventHandler hook will come in handy here. But it's not clear to me how to use it properly for that task. Might it be http.NewRequest?

Thanks in advance.

Set field value as a variable depending on another field's value

For example, on a schema I could have fields such as "city"=city and "business-name"=the_name, and use those to build a URL-friendly id and access the page via domain.com/business/the_name_city.

What would be the easiest way to do it?

I don't really understand how to access the data in its "storable" version once compiled from the schema...

Schema to JSONSchema

Is there any interest in auto-generating JSONSchema from schema? I have some stuff I would be willing to contribute upstream; it's incomplete and only covers my own use cases, but could easily be extended to handle more over time.

ElasticSearch support ETA?

Hi @rs, thank you for your amazing work,

Do you have any ETA concerning the support of ElasticSearch as a datastore backend?

[jsonschema] Support for schema.Dict

As suggested by @yanfali, I am adding tickets for how to extend the initial JSON Schema support to support more types. schema.Dict gets its own ticket, as it's tricky enough to deserve one.

Forward

schema.Dict should be encoded to the same JSON type as schema.Object (object), but rely on additionalProperties when a KeysValidator is not set, or patternProperties when it is set.

According to the validation spec section 5.4.4, additionalProperties can either be a boolean, in which case any value is allowed when it's true, or a JSON Schema, in which case the values must apply to that schema. patternProperties must be an object with regex as keys, and JSON Schemas as values.

However, as far as I can see from the examples, as well as from the Core/Validation Meta-Schema, there are no required elements in a JSON Schema object, which means it can be left empty in order to allow for all types (similar to the empty interface{} in Go).

Expected result

This really comes down to us wanting to end up with the following.

If neither KeyValidator nor ValueValidator is set:

"type": "object",
"additionalProperties": true

If only ValueValidator is set:

"type": "object",
"additionalProperties": {/* JSON Schema from validatorToJSONSchema(ValuesValidator) */}

If only KeyValidator is set:

"type": "object",
"patternProperties": {
    "<regex from KeyValidator>": {}
}

If both KeyValidator and ValueValidator are set:

"type": "object",
"patternProperties": {
    "<regex from KeyValidator>": {/* JSON Schema from validatorToJSONSchema(ValuesValidator) */}
}
