
Proteus


A simple tool for generating an application's data access layer.

Purpose

Proteus makes your SQL queries type-safe and prevents SQL injection attacks. It processes structs whose function fields carry struct tags, generating Go function implementations at runtime. These functions map their input parameters to SQL query parameters and, optionally, map the results of your SQL queries to their output values.

In addition to being type-safe, Proteus also prevents SQL injection by generating prepared statements from your SQL queries. Even dynamic in clauses are converted into injection-proof prepared statements.

Proteus is not an ORM; it does not generate SQL. It just automates away the boring parts of interacting with databases in Go.

Quick Start

  1. Define a struct that contains function fields and tags to indicate the query and the parameter names:
type ProductDaoCtx struct {
	FindByID                      func(ctx context.Context, q proteus.ContextQuerier, id int) (Product, error)                                     `proq:"select * from Product where id = :id:" prop:"id"`
	Update                        func(ctx context.Context, e proteus.ContextExecutor, p Product) (int64, error)                                   `proq:"update Product set name = :p.Name:, cost = :p.Cost: where id = :p.Id:" prop:"p"`
	FindByNameAndCost             func(ctx context.Context, q proteus.ContextQuerier, name string, cost float64) ([]Product, error)                `proq:"select * from Product where name=:name: and cost=:cost:" prop:"name,cost"`
	FindByIDMap                   func(ctx context.Context, q proteus.ContextQuerier, id int) (map[string]interface{}, error)                      `proq:"select * from Product where id = :id:" prop:"id"`
	UpdateMap                     func(ctx context.Context, e proteus.ContextExecutor, p map[string]interface{}) (int64, error)                    `proq:"update Product set name = :p.Name:, cost = :p.Cost: where id = :p.Id:" prop:"p"`
	FindByNameAndCostMap          func(ctx context.Context, q proteus.ContextQuerier, name string, cost float64) ([]map[string]interface{}, error) `proq:"select * from Product where name=:name: and cost=:cost:" prop:"name,cost"`
	Insert                        func(ctx context.Context, e proteus.ContextExecutor, id int, name string, cost *float64) (int64, error)          `proq:"insert into product(id, name, cost) values(:id:, :name:, :cost:)" prop:"id,name,cost"`
	FindByIDSlice                 func(ctx context.Context, q proteus.ContextQuerier, ids []int) ([]Product, error)                                `proq:"select * from Product where id in (:ids:)" prop:"ids"`
	FindByIDSliceAndName          func(ctx context.Context, q proteus.ContextQuerier, ids []int, name string) ([]Product, error)                   `proq:"select * from Product where name = :name: and id in (:ids:)" prop:"ids,name"`
	FindByIDSliceNameAndCost      func(ctx context.Context, q proteus.ContextQuerier, ids []int, name string, cost *float64) ([]Product, error)    `proq:"select * from Product where name = :name: and id in (:ids:) and (cost is null or cost = :cost:)" prop:"ids,name,cost"`
	FindByIDSliceCostAndNameSlice func(ctx context.Context, q proteus.ContextQuerier, ids []int, names []string, cost *float64) ([]Product, error) `proq:"select * from Product where id in (:ids:) and (cost is null or cost = :cost:) and name in (:names:)" prop:"ids,names,cost"`
	FindByNameAndCostUnlabeled    func(ctx context.Context, q proteus.ContextQuerier, name string, cost float64) ([]Product, error)                `proq:"select * from Product where name=:$1: and cost=:$2:"`
}

The first input parameter is of type context.Context, and the second input parameter is of type proteus.ContextExecutor or proteus.ContextQuerier:

// ContextQuerier defines the interface of a type that runs a SQL query with a context
type ContextQuerier interface {
	// QueryContext executes a query that returns rows, typically a SELECT.
	// The args are for any placeholder parameters in the query.
	QueryContext(ctx context.Context, query string, args ...interface{}) (*sql.Rows, error)
}
// ContextExecutor defines the interface of a type that runs a SQL exec with a context
type ContextExecutor interface {
	// ExecContext executes a query without returning any rows.
	// The args are for any placeholder parameters in the query.
	ExecContext(ctx context.Context, query string, args ...interface{}) (sql.Result, error)
}
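
Both interfaces are satisfied by *sql.DB and *sql.Tx from database/sql, so either can be passed in directly. A minimal compile-time check (a sketch, not part of Proteus):

// *sql.DB and *sql.Tx provide QueryContext and ExecContext, so they satisfy both interfaces.
var (
	_ proteus.ContextQuerier  = (*sql.DB)(nil)
	_ proteus.ContextExecutor = (*sql.DB)(nil)
	_ proteus.ContextQuerier  = (*sql.Tx)(nil)
	_ proteus.ContextExecutor = (*sql.Tx)(nil)
)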

The remaining input parameters can be primitives, structs, maps of string to interface{}, or slices.

For queries, return types can be:

  • empty
  • a single value being returned (a primitive, struct, or map of string to interface{})
  • a single value that's a slice of primitive, struct, or a map of string to interface{}
  • a primitive, struct, or map of string to interface{} and an error
  • a slice of primitive, struct, or a map of string to interface{} and an error

For inserts and updates, return types can be any of the following (an example follows the list):

  • empty
  • an int64 that indicates the number of rows affected
  • a sql.Result
  • an int64 that indicates the number of rows affected and an error
  • a sql.Result and an error
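
For example, an exec-style field can return sql.Result so that LastInsertId can be read back where the driver supports it (a hedged sketch; the field name, query, and table columns are illustrative):

// Hypothetical DAO field that returns sql.Result instead of an int64 row count.
type ProductInsertDao struct {
	Insert func(ctx context.Context, e proteus.ContextExecutor, name string, cost float64) (sql.Result, error) `proq:"insert into Product(name, cost) values(:name:, :cost:)" prop:"name,cost"`
}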

The proq struct tag holds the query. Variable substitutions are written between colons (for example, :id:). Proteus lets you refer to fields in maps and structs, as well as elements in arrays or slices, using . as a path separator. If you have structs like these:

type Person struct {
    Name string
    Address Address
    Pets []Pet
} 

type Pet struct {
    Name string
    Species string
}

type Address struct {
    Street string
    City string
    State string
}

You can write a query like this:

insert into person(name, city, pet1_name, pet2_name) values (:p.Name:, :p.Address.City:, :p.Pets.0.Name:, :p.Pets.1.Name:)

Note that the index for an array or slice must be an int literal and the key for a map must be a string.
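
Wiring that query into a DAO field might look like this (an illustrative sketch; the person table layout is assumed):

// Hypothetical DAO field that maps struct, nested-struct, and slice-element paths to query parameters.
type PersonDao struct {
	Insert func(ctx context.Context, e proteus.ContextExecutor, p Person) (int64, error) `proq:"insert into person(name, city, pet1_name, pet2_name) values (:p.Name:, :p.Address.City:, :p.Pets.0.Name:, :p.Pets.1.Name:)" prop:"p"`
}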

2. If you want to map response fields to a struct, define a struct with struct tags to indicate the mapping:

type Product struct {
	Id   int     `prof:"id"`
	Name string  `prof:"name"`
	Cost float64 `prof:"cost"`
}

3. Pass an instance of the DAO struct to the proteus.ShouldBuild function:

var productDao = ProductDaoCtx{}

func init() {
	err := proteus.ShouldBuild(context.Background(), &productDao, proteus.Sqlite)
	if err != nil {
		panic(err)
	}
}

4. Open a connection to a SQL database:

	db := setupDb()
	defer db.Close()
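
setupDb is not part of Proteus; it is whatever opens your database. A minimal sketch for SQLite, assuming the mattn/go-sqlite3 driver and a local file name:

import (
	"database/sql"

	_ "github.com/mattn/go-sqlite3" // registers the "sqlite3" driver
)

// setupDb opens (or creates) a local SQLite database file.
func setupDb() *sql.DB {
	db, err := sql.Open("sqlite3", "./proteus_test.db")
	if err != nil {
		panic(err)
	}
	return db
}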

5. Make calls to the function fields in your Proteus-populated struct:

    ctx := context.Background()
	fmt.Println(productDao.FindByID(ctx, db, 10))
	p := Product{10, "Thingie", 56.23}
	fmt.Println(productDao.Update(ctx, db, p))
	fmt.Println(productDao.FindByID(ctx, db, 10))
	fmt.Println(productDao.FindByNameAndCost(ctx, db, "fred", 54.10))
	fmt.Println(productDao.FindByNameAndCost(ctx, db, "Thingie", 56.23))

	//using a map of [string]interface{} works too!
	fmt.Println(productDao.FindByIDMap(ctx, db, 10))
	fmt.Println(productDao.FindByNameAndCostMap(ctx, db, "Thingie", 56.23))

	fmt.Println(productDao.FindByID(ctx, db, 11))
	m := map[string]interface{}{
		"Id":   11,
		"Name": "bobbo",
		"Cost": 12.94,
	}
	fmt.Println(productDao.UpdateMap(ctx, db, m))
	fmt.Println(productDao.FindByID(ctx, db, 11))

	fmt.Println(productDao.FindByIDSlice(ctx, db, []int{1, 3, 5}))
	fmt.Println(productDao.FindByIDSliceAndName(ctx, db, []int{1, 3, 5}, "person1"))
	fmt.Println(productDao.FindByIDSliceNameAndCost(ctx, db, []int{1, 3, 5}, "person3", nil))
	fmt.Println(productDao.FindByIDSliceCostAndNameSlice(ctx, db, []int{1, 3, 5}, []string{"person3", "person5"}, nil))

Proteus without the context

If you are using an older database driver that does not support context.Context, Proteus supports it as well:

type ProductDao struct {
	FindByID                      func(q proteus.Querier, id int) (Product, error)                                     `proq:"select * from Product where id = :id:" prop:"id"`
	Update                        func(e proteus.Executor, p Product) (int64, error)                                   `proq:"update Product set name = :p.Name:, cost = :p.Cost: where id = :p.Id:" prop:"p"`
	FindByNameAndCost             func(q proteus.Querier, name string, cost float64) ([]Product, error)                `proq:"select * from Product where name=:name: and cost=:cost:" prop:"name,cost"`
	FindByIDMap                   func(q proteus.Querier, id int) (map[string]interface{}, error)                      `proq:"select * from Product where id = :id:" prop:"id"`
	UpdateMap                     func(e proteus.Executor, p map[string]interface{}) (int64, error)                    `proq:"update Product set name = :p.Name:, cost = :p.Cost: where id = :p.Id:" prop:"p"`
	FindByNameAndCostMap          func(q proteus.Querier, name string, cost float64) ([]map[string]interface{}, error) `proq:"select * from Product where name=:name: and cost=:cost:" prop:"name,cost"`
	Insert                        func(e proteus.Executor, id int, name string, cost *float64) (int64, error)          `proq:"insert into product(id, name, cost) values(:id:, :name:, :cost:)" prop:"id,name,cost"`
	FindByIDSlice                 func(q proteus.Querier, ids []int) ([]Product, error)                                `proq:"select * from Product where id in (:ids:)" prop:"ids"`
	FindByIDSliceAndName          func(q proteus.Querier, ids []int, name string) ([]Product, error)                   `proq:"select * from Product where name = :name: and id in (:ids:)" prop:"ids,name"`
	FindByIDSliceNameAndCost      func(q proteus.Querier, ids []int, name string, cost *float64) ([]Product, error)    `proq:"select * from Product where name = :name: and id in (:ids:) and (cost is null or cost = :cost:)" prop:"ids,name,cost"`
	FindByIDSliceCostAndNameSlice func(q proteus.Querier, ids []int, names []string, cost *float64) ([]Product, error) `proq:"select * from Product where id in (:ids:) and (cost is null or cost = :cost:) and name in (:names:)" prop:"ids,names,cost"`
	FindByNameAndCostUnlabeled    func(q proteus.Querier, name string, cost float64) ([]Product, error)                `proq:"select * from Product where name=:$1: and cost=:$2:"`
}

In this case, the first input parameter is either of type proteus.Executor or proteus.Querier:

// Executor runs queries that modify the data store.
type Executor interface {
	// Exec executes a query without returning any rows.
	// The args are for any placeholder parameters in the query.
	Exec(query string, args ...interface{}) (sql.Result, error)
}
// Querier runs queries that return Rows from the data store
type Querier interface {
	// Query executes a query that returns rows, typically a SELECT.
	// The args are for any placeholder parameters in the query.
	Query(query string, args ...interface{}) (*sql.Rows, error)
}

Use proteus.Build to generate your DAO functions:

var productDao = ProductDao{}

func init() {
	err := proteus.Build(&productDao, proteus.Sqlite)
	if err != nil {
		panic(err)
	}
}

Struct Tags

Proteus generates implementations of DAO functions by examining struct tags and parameter types on function fields in a struct. The recognized struct tags are:

  • proq - The query. The function returns a single entity or a list of entities
  • prop - The parameter names. Should be in the order of the function parameters (skipping the leading context.Context, Executor, or Querier parameters)

The prop struct tag is optional. If it is not supplied, the query must contain positional parameters ($1, $2, etc.) instead of named parameters. For example:

type ProductDaoS struct {
	FindById             func(ctx context.Context, q proteus.ContextQuerier, id int) (Product, error)                      `proq:"select * from Product where id = :$1:"`
	Update               func(ctx context.Context, e proteus.ContextExecutor, p Product) (int64, error)                    `proq:"update Product set name = :$1.Name:, cost = :$1.Cost: where id = :$1.Id:"`
	FindByNameAndCost    func(ctx context.Context, q proteus.ContextQuerier, name string, cost float64) ([]Product, error) `proq:"select * from Product where name=:$1: and cost=:$2:"`
}

If you want to map the output of a DAO with a proq tag to a struct, then create a struct and put the following struct tag on each field that you want to map to a value in the output:

  • prof - The fields on the dto that are mapped to select parameters in a query

Storing queries outside of struct tags

Struct tags are cumbersome for all but the shortest queries. To allow a more natural home for longer queries, one or more instances of the proteus.QueryMapper interface can be passed into the proteus.Build or proteus.ShouldBuild functions. To reference a query stored in a proteus.QueryMapper, put q:name as the value of the proq struct tag, where name is the name of the query.

For example:

	m := proteus.MapMapper{
		"q1": "select * from foo where id = :id:",
		"q2": "update foo set x=:x: where id = :id:",
	}

	type s struct {
		GetF   func(ctx context.Context, q proteus.ContextQuerier, id string) (f, error)                `proq:"q:q1" prop:"id"`
		Update func(ctx context.Context, e proteus.ContextExecutor, id string, x string) (int64, error) `proq:"q:q2" prop:"id,x"`
	}
	
	sImpl := s{}
	err := proteus.ShouldBuild(context.Background(), &sImpl, proteus.Sqlite, m)

Out of the box, you can use either a map[string]string or a Unix properties file to store your queries. To use a map[string]string, cast your map to proteus.MapMapper (or just declare your variable to be of type proteus.MapMapper). To use a properties file, call the function proteus.PropFileToQueryMapper with the name of the properties file that contains your queries.
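
For example (a hedged sketch; the file name is illustrative, and an error-returning signature for proteus.PropFileToQueryMapper is assumed):

	// queries.properties (hypothetical contents):
	//   q1=select * from foo where id = :id:
	//   q2=update foo set x=:x: where id = :id:
	qm, err := proteus.PropFileToQueryMapper("queries.properties")
	if err != nil {
		panic(err)
	}
	err = proteus.ShouldBuild(context.Background(), &sImpl, proteus.Sqlite, qm)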

Generating function variables

Some people don't want to use structs and struct tags to implement their SQL mapping layer. Starting with version 0.11.0, Proteus can also generate functions that aren't fields in a struct.

First, create an instance of a proteus.Builder. The factory function takes a proteus.Adapter and zero or more proteus.QueryMapper instances:

    b := NewBuilder(Postgres)

Next, declare a function variable with the signature you want. The parameters for the function variables follow the same rules as the function fields:

    var f func(c context.Context, e ContextExecutor, name string, age int) (int64, error)
    var g func(c context.Context, q ContextQuerier, id int) (*Person, error)
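
Here, Person is assumed to be an ordinary mapped struct, for example:

    // Hypothetical Person type used by the g function variable above.
    type Person struct {
        Id   int    `prof:"id"`
        Name string `prof:"name"`
        Age  int    `prof:"age"`
    }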

Then call the BuildFunction method on your proteus.Builder instance, passing in a pointer to your function variable, the SQL query (or a query mapper reference), and the parameter names as a string slice:

    err := b.BuildFunction(ctx, &f, "INSERT INTO PERSON(name, age) VALUES(:name:, :age:)", []string{"name", "age"})
    if err != nil {
        t.Fatalf("build function failed: %v", err)
    }
    
    err = b.BuildFunction(ctx, &g, "SELECT * FROM PERSON WHERE id = :id:", []string{"id"})
    if err != nil {
        t.Fatalf("build function 2 failed: %v", err)
    }

Finally, call your functions to run your SQL queries:

    db := setupDbPostgres()
    defer db.Close()
    ctx := context.Background()
    
    rows, err := f(ctx, db, "Fred", 20)
    if err != nil {
        t.Fatalf("create failed: %v", err)
    }
    fmt.Println(rows) // prints 1
    
    p, err := g(ctx, db, 1)
    if err != nil {
        t.Fatalf("get failed: %v", err)
    }
    fmt.Println(p) // prints {1, Fred, 20}

Ad-hoc database queries

While Proteus is focused on type safety, sometimes you just want to run a query without associating it with a function. Starting with version 0.11.0, Proteus allows you to run ad-hoc database queries.

First, create an instance of a proteus.Builder. The factory function takes a proteus.Adapter and zero or more proteus.QueryMapper instances:

    b := NewBuilder(Postgres)

Next, run your query by passing it to the Exec, ExecResult, or Query methods on proteus.Builder.

Exec expects a context.Context, a proteus.ContextExecutor, the query, and a map of string to interface{} whose keys are the parameter names and whose values are the parameter values. It returns an int64 with the number of rows modified and an error.

ExecResult expects the same arguments as Exec, but returns a sql.Result (from which the number of rows modified or the last insert ID can be read) and an error.

Query expects a context.Context, a proteus.ContextQuerier, the query, a map of string to interface{} whose keys are the parameter names and whose values are the parameter values, and a pointer to the value that should be populated by the query. It returns an error.

    db := setupDbPostgres()
    defer db.Close()
    ctx := context.Background()

    rows, err := b.Exec(ctx, db, "INSERT INTO PERSON(name, age) VALUES(:name:, :age:)", map[string]interface{}{"name": "Fred", "age": 20})
    if err != nil {
        t.Fatalf("create failed: %v", err)
    } 
    fmt.Println(rows) // prints 1

    var p *Person
    err = b.Query(ctx, db, "SELECT * FROM PERSON WHERE id = :id:", map[string]interface{}{"id": 1}, &p)
    if err != nil {
        t.Fatalf("get failed: %v", err)
    }
    fmt.Println(*p) // prints {1, Fred, 20}
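
ExecResult is not shown above; here is a hedged sketch, assuming it mirrors Exec but hands back the sql.Result described earlier:

    res, err := b.ExecResult(ctx, db, "INSERT INTO PERSON(name, age) VALUES(:name:, :age:)", map[string]interface{}{"name": "Wilma", "age": 19})
    if err != nil {
        t.Fatalf("create failed: %v", err)
    }
    affected, _ := res.RowsAffected()
    fmt.Println(affected) // prints 1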

Ad-hoc queries support all of the functionality of Proteus except for type safety. You can reference queries in proteus.QueryMapper instances, build out dynamic in clauses, extract values from struct instances, and map to a struct with prof tags on its fields.

Valid function signatures

API

FAQ

1. Why doesn't Proteus generate a struct that meets an interface?

Go has some surprising limitations. One of them is that there is no way to use reflection to build an implementation of a method; you can only build implementations of functions. The difference is subtle, but the net result is that you cannot supply an interface to the reflection API and get back something that implements that interface.

Another Go limitation is that there is no way to attach metadata to an interface's methods; only struct fields can have associated metadata.

A third limitation is that a Go interface is not satisfied by a struct that has fields of function type, even if the names of the functions and the types of the function parameters match the interface's methods.

Given these limitations, Proteus uses structs to hold the generated functions. If you want an interface that describes the functionality provided by the struct, you can do something like this:

type ProductDaoS struct {
	FindById             func(ctx context.Context, q proteus.ContextQuerier, id int) (Product, error)                       `proq:"select * from Product where id = :id:" prop:"id"`
	Update               func(ctx context.Context, e proteus.ContextExecutor, p Product) (int64, error)                     `proq:"update Product set name = :p.Name:, cost = :p.Cost: where id = :p.Id:" prop:"p"`
	FindByNameAndCost    func(ctx context.Context, q proteus.ContextQuerier, name string, cost float64) ([]Product, error)  `proq:"select * from Product where name=:name: and cost=:cost:" prop:"name,cost"`
}

type ProductDao interface {
    FindById(context.Context, proteus.ContextQuerier, int) (Product, error)
    Update(context.Context, proteus.ContextExecutor, Product) (int64, error)
    FindByNameAndCost(context.Context, proteus.ContextQuerier, string, float64) ([]Product, error)
}

type productWrapper struct {
    ProductDaoS
}

func (pw productWrapper) FindById(ctx context.Context, q proteus.ContextQuerier, id int) (Product, error) {
    return pw.ProductDaoS.FindById(ctx, q, id)
}

func (pw productWrapper) Update(ctx context.Context, e proteus.ContextExecutor, p Product) (int64, error) {
    return pw.ProductDaoS.Update(ctx, e, p)
}

func (pw productWrapper) FindByNameAndCost(ctx context.Context, q proteus.ContextQuerier, n string, c float64) ([]Product, error) {
    return pw.ProductDaoS.FindByNameAndCost(ctx, q, n, c)
}

func NewProductDao(ctx context.Context) ProductDao {
    p := ProductDaoS{}
    err := proteus.ShouldBuild(ctx, &p, proteus.Sqlite)
    if err != nil {
        panic(err)
    }
    return productWrapper{p}
}

A future version of Proteus may include a tool that can be used with go generate to automatically create the wrapper and interface.

2. Why do I have to specify the parameter names with a struct tag?

This is another limitation of Go's reflection API. The names of parameters are not available at runtime to be inspected, so they must be supplied another way in order to be referenced in a query. If you do not want to use a prop struct tag, you can use positional parameters ($1, $2, etc.) instead.

Logging

In order to avoid tying the client code to one particular implementation, Proteus includes its own logger that can be bridged to any other Go logger.

By default, Proteus will log nothing. The simplest way to change the amount of information logged by Proteus is by calling the function proteus.SetLogLevel. This function takes in a value of type logger.Level.

If you are using the proteus.ShouldBuild function to generate your DAOs, you can supply a logging level by passing in the context returned by the function logger.WithLevel. This value is overridden by the logging level set by proteus.SetLogLevel.

You can also include additional values in the logs by passing in the context returned by the function logger.WithValues. This function adds one or more logger.Pair values to the context.

If you are using the context.Context support in Proteus 0.10.0 and later, the logger level can be supplied in the context passed into the function. This will override any logger level specified when proteus.ShouldBuild was invoked. You can also supply additional logging fields for the function call by using a context returned by the logger.WithValues function.
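
For example (a minimal sketch; the logger.Pair field names and the key/value shown are assumptions):

    ctx := logger.WithLevel(context.Background(), logger.DEBUG)
    // Key and Value are assumed field names for logger.Pair.
    ctx = logger.WithValues(ctx, logger.Pair{Key: "request_id", Value: "abc-123"})
    products, err := productDao.FindByNameAndCost(ctx, db, "Thingie", 56.23)
    fmt.Println(products, err)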

All Proteus logging output includes three default fields (in order):

  • time (the value is a time.Time in UTC)
  • level (the level of the log as a logger.Level)
  • message (the message supplied with the log)

The default logger provided with the Proteus logging package outputs to os.Stdout in a JSON format. The default logger implementation is configurable; you can specify a different io.Writer by using the call:

    logger.Config(logger.DefaultLogger{
        Writer: myWriterImpl,
    })

The format of the output of the DefaultLogger can be configured by supplying a logger.Formatter:

    logger.Config(logger.DefaultLogger{
        Writer: myWriterImpl,
        Formatter: myFormatter,
    })

A helper type logger.FormatterFunc will turn any function with the signature func(vals ...interface{}) string into a logger.Formatter.

The first 6 values passed to the Format method are:

  • position 0: "time"
  • position 1: a time.Time in UTC
  • position 2: "level"
  • position 3: a logger.Level
  • position 4: "message"
  • position 5: the message for the log
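
For example, a custom formatter can render those alternating key/value pairs however you like (a hedged sketch using logger.FormatterFunc):

    logger.Config(logger.DefaultLogger{
        Writer: os.Stdout,
        Formatter: logger.FormatterFunc(func(vals ...interface{}) string {
            // vals alternates keys and values: "time", t, "level", l, "message", m, ...
            var sb strings.Builder
            for i := 0; i+1 < len(vals); i += 2 {
                fmt.Fprintf(&sb, "%v=%v ", vals[i], vals[i+1])
            }
            return strings.TrimSpace(sb.String()) + "\n"
        }),
    })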

If you want to supply your own logger implementation, pass an implementation of logger.Logger into logger.Config. This interface matches the definition used by go-kit. The default parameters are in the order specified above. There is a logger.LoggerFunc helper type to convert any function with the signature of func(vals ...interface{}) error into a logger.Logger.
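
For example, bridging to the standard library's log package might look like this (a sketch, assuming logger.LoggerFunc as described above):

    logger.Config(logger.LoggerFunc(func(vals ...interface{}) error {
        log.Println(vals...) // vals arrive as alternating keys and values
        return nil
    }))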

Feel free to use this logger within your own code. If this logger proves to be useful, it might be broken into its own top-level package.

Future Directions

There are more interesting features coming to Proteus. They are (in likely order of implementation):

  • Build in clauses using a field from a slice of structs or maps

  • Generate batch values clauses using a slice of structs or maps

  • More expansive performance measurement support and per-request logging control

  • A go generate tool to create a wrapper struct and interface for a Proteus DAO

  • API implementations for non-SQL data stores and HTTP requests

  • Additional struct tags to generate basic CRUD queries automatically. This is about as far as I'd go in implementing ORM-like features. The goal for Proteus is to be an adapter between a data source and business logic, not to define your entire programming model.


proteus's Issues

Table-qualified prof tags

I have a table schema like:

CREATE TABLE table2 (
  "id" BIGSERIAL PRIMARY KEY,
  "uuid" VARCHAR(36) UNIQUE NOT NULL,
  "column2" VARCHAR(24) NOT NULL
);

CREATE TABLE table1 (
  "id" BIGSERIAL PRIMARY KEY,
  "uuid" VARCHAR(36) UNIQUE NOT NULL,
  "ref" VARCHAR(36) REFERENCES table2(uuid)
);

I'd like to do a query like SELECT * from table1 JOIN table2 ON table1.ref = table2.uuid, and have table2 referenced as a child struct. The problem is that both tables have id and uuid columns. Can I qualify the prof tags so that table2 has a prof tag of prof:"table2.uuid" for example? I know I can use "AS" clauses for each column, but this becomes a maintenance nightmare for any sizable number of queries.

Interface as parameter to DAOs

Consider this scenario

type SortOrder string

const (
    ASC SortOrder = "ASC"
    DESC SortOrder = "DESC"
)

type FooDao struct {
    GetBars func(ctx context.Context, q ContextQuerier, offset, count int, sortField string, sortOrder SortOrder) ([]Bar, error)
}

I was thinking that having something like this would reduce the boilerplate code for many similarly designed DAOs:

type Pageable interface {
    Offset() int
    Count() int
    SortField() string
    Order() SortOrder
}

type FooDao struct {
    GetBars func(ctx context.Context, q ContextQuerier, p Pageable) ([]Bar, error) `proq:"q:paged_bars"`
}

Then the paged_bars query would be like this

SELECT *
FROM bar
ORDER BY 
CASE WHEN (:$1.SortField(): = 'updatedAt' AND :$1.Order(): = 'ASC') THEN updated_at END ASC
,CASE WHEN (:$1.SortField(): = 'updatedAt' AND :$1.Order(): = 'DESC') THEN updated_at END DESC
,CASE WHEN (:$1.SortField(): = 'age' AND :$1.Order(): = 'ASC') THEN age END ASC
,CASE WHEN (:$1.SortField(): = 'age' AND :$1.Order(): = 'DESC') THEN age END DESC
,CASE WHEN (:$1.SortField(): = 'active' AND :$1.Order(): = 'ASC') THEN active END ASC
,CASE WHEN (:$1.SortField(): = 'active' AND :$1.Order(): = 'DESC') THEN active END DESC
,CASE WHEN (:$1.SortField(): = 'name' AND :$1.Order(): = 'ASC') THEN (firstname || ' ' || lastname) END ASC
,CASE WHEN (:$1.SortField(): = 'name' AND :$1.Order(): = 'DESC') THEN (firstname || ' ' || lastname) END DESC
OFFSET :$1.Offset(): FETCH NEXT :$1.Count(): ROWS ONLY;

So the idea is to be able to pass interface types and call their functions. This is one specific example but there could be many. In my case, the pagination parameters come from different sources that may calculate the offset/size differently.

What's your take?

Add a way to generate a single function

Proteus works by generating function implementations and attaching them to function fields on structs. There's no reason why it can't generate individual functions, too. The signature would look something like:

BuildFunction(f interface{}, query string, params []string) error

f must be a pointer to a function, query is the analog of the proq struct tag, and params is the analog of the prop struct tag.

Add support for ExecContext and QueryContext

The preferred way to support db queries is via the XXXContext methods. Proteus needs to support function fields where the first parameter is a context.Context and the second parameter is an implementation of an interface that wraps ExecContext or QueryContext.

If a context.Context is provided, any logging information (level or default key/value pairs) will be used for the request. This allows per-request logging levels.

Logging and metrics

We started using proteus some months ago and released the first service using proteus to production last week. We have added logging outside of proteus, but would really like to have a means to instrument proteus with both logging and metrics. We're happy to submit a PR, but before doing so, I'd like to discuss how this should be implemented. For both logging and metrics, we want the statement as well as the execution time included. It could be implemented with an optional event listener interface the client provides upon configuration of proteus or by using a channel to propagate events on. @jonbodner do you have any thoughts on this?

sql.NullString doesn't work

If I try to query a column of type sql.NullString with no value in it, buildStruct fails with a message of the form:

Expected nil, but got: &errors.errorString{s:"Unable to assign nil value to non-pointer struct field ToAddressLine2 of type sql.NullString"}

Select from list

Is there any way to do a query of the form:

SELECT * from myTable where id IN (:list:) ?

Handle spaces in the parameter list in the 'prop' tag

Currently, parameters must be specified without spaces after commas in the 'prop' tag, i.e.:

prop:"transactionID,userID"

is not the same as

prop:"transactionID, userID"

Trim for spaces after parsing the tag value.

I'll submit a PR shortly with the required change.

Option to return values of type sql.Result to enable harvesting LastInsertId()

The sql.Result interface contains the LastInsertId() function, which can be important information useful for inserts/upserts.

It would be nice if there was a way that Proteus could return this information back to the caller. Perhaps by specifying sql.Result as a valid return type? My alternative right now is to execute something like last_insert_rowid() against my sqlite3 DB, which is prone to race conditions.

P.S. I keep posting 'nice-to-have' issues. Hope that's okay.
P.P.S. I'm highly motivated to raise a PR on this one, if maintainers are open to that.

Support custom Scanner slices (like JSONB columns with Arrays)

This project seems great! While giving it a go with a project I'm doing in which, among a lot of things, I use a JSONB column in which I store an array of elements.

I've been using sqlx for named parameters so far which works well but is getting a bit out of hand which is why this "magic" DAO style seems very interesting.

The problem I'm facing is that the array of elements are a type which in turn is a slice which implement the Scanner interface to marshal as JSON.

Something like this:

type Records []*Record

func (r *Records) Scan(value interface{}) error {
	if row, ok := value.([]byte); ok {
		return json.Unmarshal(row, r)
	}
	return nil
}
func (r Records) Value() (driver.Value, error) {
	return json.Marshal(r)
}

When using proteus it expands the slice into multiple variables and my vague guess is that the issue might be that it checks if it's a slice before it checks if it implements the Scanner interface.

I might be able to write a test and a PR but I'm currently choked so I just wanted to give you a heads up and check if it sounds like the right path first.

Thanks!

Could the current version be tagged

Could you please tag the current version of proteus, as it's very different from 0.3.0? By default, godep will take the latest tagged version.

proe tags do not work

The Build function only looks at proq tags in:
query, ok := curField.Tag.Lookup("proq")
If I tag something with proe instead of proq, it will not be converted to a function, and will cause a null pointer exception. This means the examples are probably broken.

Unable to INSERT into []byte column

I have a Dao with a []byte column. When I try to insert, I get
Received unexpected error pq: INSERT has more expressions than target columns

If I remove the reference to the []byte column, things work as requested. Does Proteus support inserting to []byte columns?

racey when benched ?

x-MacBook-Pro:bench apple$ go get github.com/mattn/go-sqlite3
x-MacBook-Pro:bench apple$ go test -bench=.
WARN[0000] skipping function FindById due to error: the 1st output parameter of an Executor must be int64
BenchmarkSelectProteus-4 panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x41a3553]

goroutine 6 [running]:
github.com/jonbodner/proteus/bench.BenchmarkSelectProteus(0xc420012160)
/Users/apple/workspace/go/src/github.com/jonbodner/proteus/bench/bench_test.go:33 +0x2b3
testing.(*B).runN(0xc420012160, 0x1)
/usr/local/opt/go/libexec/src/testing/benchmark.go:140 +0xb2
testing.(*B).run1.func1(0xc420012160)
/usr/local/opt/go/libexec/src/testing/benchmark.go:213 +0x5a
created by testing.(*B).run1
/usr/local/opt/go/libexec/src/testing/benchmark.go:214 +0x7f
exit status 2
FAIL github.com/jonbodner/proteus/bench 0.041s

--

This SQL driver is buggy.
There is a better one now. See here :)

gobuffalo/buffalo#372

Support for RETURNING "id" queries

I have an INSERT query that creates a new row on an autoincrement table. Is there any way to have a RETURNING "id" suffix to the query and get the new id?

time.Time gets mapped as a nested struct in version 0.7 and later

After the support for nested structs in 0.7 (which is great!), fields of type time.Time are now incorrectly mapped as nested struct types, since time.Time is a struct but does not implement sql.Scanner. This causes all our time.Time fields not to be mapped correctly to our Postgres db, where we want to map directly to columns of type TIMESTAMP WITH TIMEZONE.

The culprit is the recursion at line 97 of mapper/mapper.go, inside the function buildColFieldMap:

			//if this is a struct, recurse
			// we are going to use an prof value as a prefix
			// then we go through each of the fields in the struct
			// if any of them aren't structs and have prof tags, we
			// prepend the parent struct and store off a fieldi
			// only if this doesn't implement a scanner. If it does, then go with the scanner
			if sf.Type.Kind() == reflect.Struct && !reflect.PtrTo(sf.Type).Implements(scannerType) {
				buildColFieldMap(sf.Type, childFieldInfo, colFieldMap)
			} else {
				colFieldMap[strings.SplitN(tagVal, ",", 2)[0]] = childFieldInfo
			}

I know mapping of time.Time is very driver specific - maybe add a special case handling this?

Fail loudly when building the DAO

It would be great if we had an option to build a DAO and be certain that all fields are mapped as expected. Right now a warning is written to the logs but the building continues.

I'm thinking something like a proteus.MustBuild() function that returns an error on any encountered error. What do you think about this? Would you be interested in this feature if we provide a PR? :)

Allow an external logger

The lib uses logrus for logging warnings and debug messages. It would be great if one could provide the needed logging functions when building the DAO, possibly with logrus as a fallback. Bundling logrus into this lib seems unnecessary to me. The log package from the standard library could be used as well, thereby removing the dependency.

Is this something we could do? We can of course provide a PR for it if so.

Proteus does not properly populate fields in embedded structs

func TestShouldBuildEmbedded(t *testing.T) {
	type Inner struct {
		Name string `prof:"name"`
	}
	type MyProduct struct {
		Inner
		Id int `prof:"id"`
	}
	type ProductDao struct {
		Insert func(e Executor, p MyProduct) (int64, error)    `proq:"insert into product(name) values(:p.Name:)" prop:"p"`
		Get    func(q Querier, name string) (MyProduct, error) `proq:"select * from product where name=:name:" prop:"name"`
	}

	productDao := ProductDao{}
	c := logger.WithLevel(context.Background(), logger.DEBUG)
	err := ShouldBuild(c, &productDao, Sqlite)
	if err != nil {
		t.Fatal(err)
	}
	db, err := sql.Open("sqlite3", "./proteus_test.db")
	if err != nil {
		t.Fatal(err)
	}

	exec, err := db.Begin()
	if err != nil {
		t.Fatal(err)
	}

	_, err = exec.Exec("CREATE TABLE product(id INTEGER PRIMARY KEY, name VARCHAR(100), null_field VARCHAR(100))")
	if err != nil {
		t.Fatal(err)
	}

	count, err := productDao.Insert(Wrap(exec), MyProduct{Inner: Inner{Name: "foo"}})
	if err != nil {
		t.Fatal(err)
	}
	if count != 1 {
		t.Fatal("Should have modified 1 row")
	}
	prod, err := productDao.Get(Wrap(exec), "foo")
	if err != nil {
		t.Fatal(err)
	}
	if prod.Name != "foo" {
		t.Fatal(fmt.Sprintf("Expected prod with name, got %+v", prod))
	}
}

This test properly creates a record in the database from an embedded field, but the field is not populated when the value is retrieved. The bug is in the function at mapper/mapper.go:117 and the code that builds up the colFieldMap (it doesn't map embedded fields and doesn't know how to populate them).

Implement embedded fields

When defining the DAO struct it would be great to be able to split it up into multiple structs and embed them.

type BaseDao struct {
  FindAll func(e proteus.Querier) ([]Type, error) `proq:"SELECT * FROM table"`
  CustomDao
}
type CustomDao struct {
  FindCustom func(e proteus.Querier, id string) (Type, error) `proq:"SELECT * FROM table WHERE id = :id:" prop:"id"`
}

The use case for us is as follows. We generate some standard functionality in the BaseDao struct and would like to be able to regenerate this without losing the custom functionality added by CustomDao.

Is this possible with the reflection API? If you have some guidance to where this could be done, I'd be happy to contribute.

Running UPDATE statements with a ContextQuerier silently fails

I recently made the mistake of specifying a ContextQuerier where I needed a ContextExecutor.

Strangely enough, my test failed because the number of rows returned was zero. However, no error was returned from the function either.

I'm not sure if this is reasonable to ask or if it'd require Proteus to do something more intrusive like introspecting the query, but it might be nice if this edge case was signaled in a more disruptive fashion.

Add latest/current version information on the front page

I pulled the library like this go get -u github.com/jonbodner/proteus and I get version v0.14.0.

The contents of the file in the version that I have are different from what is on github.

proteus/query_mappers.go

Lines 19 to 22 in a187783

func (pm propFileMapper) Map(name string) string {
	val, _ := pm.properties.Get(name)
	return val
}

My version (v0.14.0)

(screenshot of the local copy of query_mappers.go omitted)

What is the current up to date version?

let there be multiple instances of the proteus logger

Right now, there is exactly one logger in the system. It'd be better for the logger to work like the HTTP client and server in the standard library: have a default implementation and allow instances to be constructed as well.

Prevent Queriers from having a single error output parameter

Right now, you can define a querier with 0, 1, or 2 output parameters. An error is only returned if there are two output parameters. If you use 1 output parameter of type error, the error is swallowed. Also, it doesn't make any sense to have a querier that doesn't query; you should return a value.

The fix is to make sure that if there is only a single parameter being returned it cannot be of type error.

SIGSEGV can occur when using structs with bad function return types

I recently tried to create and use a malformed function type like so. The return value is error whereas it should be (int64, error) to receive an error properly.

type UserDao struct {
  Insert func(ctx context.Context, e proteus.ContextExecutor, email string) error `proq:...`
}

I was able to instantiate this struct with proteus.Build, and everything worked out fine until I tried to use Insert, which failed with a confusing SIGSEGV at the point of failure.
