
proteus's Issues

SIGSEGV can occur when using structs with bad function return types

I recently tried to create and use a malformed function type like so. The return type is error, whereas it should be (int64, error) to receive an error properly.

type UserDao struct {
  Insert func(ctx context.Context, e proteus.ContextExecutor, email string) error `proq:...`
}

I was able to build this struct with proteus.Build, and everything worked fine until I tried to use Insert, which failed with a confusing SIGSEGV at the call site.

Racy when benched?

x-MacBook-Pro:bench apple$ go get github.com/mattn/go-sqlite3
x-MacBook-Pro:bench apple$ go test -bench=.
WARN[0000] skipping function FindById due to error: the 1st output parameter of an Executor must be int64
BenchmarkSelectProteus-4 panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x41a3553]

goroutine 6 [running]:
github.com/jonbodner/proteus/bench.BenchmarkSelectProteus(0xc420012160)
/Users/apple/workspace/go/src/github.com/jonbodner/proteus/bench/bench_test.go:33 +0x2b3
testing.(*B).runN(0xc420012160, 0x1)
/usr/local/opt/go/libexec/src/testing/benchmark.go:140 +0xb2
testing.(*B).run1.func1(0xc420012160)
/usr/local/opt/go/libexec/src/testing/benchmark.go:213 +0x5a
created by testing.(*B).run1
/usr/local/opt/go/libexec/src/testing/benchmark.go:214 +0x7f
exit status 2
FAIL github.com/jonbodner/proteus/bench 0.041s

--

This SQL driver is buggy.
There is a better one now. See here :)

gobuffalo/buffalo#372

Could the current version be tagged?

Could you please tag the current version of proteus as it's very different from 0.3.0. By default, godep will take the latest tagged version.

Interface as parameter to DAOs

Consider this scenario

type SortOrder string

const (
    ASC SortOrder = "ASC"
    DESC SortOrder = "DESC"
)

type FooDao struct {
    GetBars func(ctx context.Context, q ContextQuerier, offset, count int, sortField string, sortOrder SortOrder) ([]Bar, error)
}

I was thinking that having something like this would reduce the boilerplate code for many similarly designed DAOs:

type Pageable interface {
    Offset() int
    Count() int
    SortField() string
    Order() SortOrder
}

type FooDao struct {
    GetBars func(ctx context.Context, q ContextQuerier, p Pageable) ([]Bar, error) `proq:"q:paged_bars"`
}

Then the paged_bars query would look like this:

SELECT *
FROM bar
ORDER BY 
CASE WHEN (:$1.SortField(): = 'updatedAt' AND :$1.Order(): = 'ASC') THEN updated_at END ASC
,CASE WHEN (:$1.SortField(): = 'updatedAt' AND :$1.Order(): = 'DESC') THEN updated_at END DESC
,CASE WHEN (:$1.SortField(): = 'age' AND :$1.Order(): = 'ASC') THEN age END ASC
,CASE WHEN (:$1.SortField(): = 'age' AND :$1.Order(): = 'DESC') THEN age END DESC
,CASE WHEN (:$1.SortField(): = 'active' AND :$1.Order(): = 'ASC') THEN active END ASC
,CASE WHEN (:$1.SortField(): = 'active' AND :$1.Order(): = 'DESC') THEN active END DESC
,CASE WHEN (:$1.SortField(): = 'name' AND :$1.Order(): = 'ASC') THEN (firstname || ' ' || lastname) END ASC
,CASE WHEN (:$1.SortField(): = 'name' AND :$1.Order(): = 'DESC') THEN (firstname || ' ' || lastname) END DESC
OFFSET :$1.Offset(): FETCH NEXT :$1.Count(): ROWS ONLY;

So the idea is to be able to pass interface types and call their functions. This is one specific example but there could be many. In my case, the pagination parameters come from different sources that may calculate the offset/size differently.

What's your take?

time.Time gets mapped as a nested struct in version 0.7 and later

After the addition of support for nested structs in 0.7 (which is great!), fields of type time.Time are now incorrectly mapped as nested struct types, since time.Time is a struct but does not implement sql.Scanner. This causes all of our time.Time fields to not be mapped correctly to our Postgres DB, where we want to map them directly to columns of type TIMESTAMP WITH TIME ZONE.

The culprit is the recursion at line 97 of mapper/mapper.go, inside the function buildColFieldMap:

			// if this is a struct, recurse:
			// we are going to use a prof value as a prefix,
			// then we go through each of the fields in the struct;
			// if any of them aren't structs and have prof tags, we
			// prepend the parent struct and store off a fieldInfo,
			// but only if this doesn't implement a scanner. If it does, then go with the scanner
			if sf.Type.Kind() == reflect.Struct && !reflect.PtrTo(sf.Type).Implements(scannerType) {
				buildColFieldMap(sf.Type, childFieldInfo, colFieldMap)
			} else {
				colFieldMap[strings.SplitN(tagVal, ",", 2)[0]] = childFieldInfo
			}

I know mapping of time.Time is very driver specific - maybe add a special case handling this?

Handle spaces in the parameter list in the 'prop' tag

Currently, parameters must be specified without spaces after commas in the 'prop' tag, i.e.:

prop:"transactionID,userID"

is not the same as

prop:"transactionID, userID"

Trim spaces after parsing the tag value.

I'll submit a PR shortly with the required change.

Implement embedded fields

When defining the DAO struct it would be great to be able to split it up into multiple structs and embed them.

type BaseDao struct {
  FindAll func(e proteus.Querier) ([]Type, error) `proq:"SELECT * FROM table"`
  CustomDao
}
type CustomDao struct {
  FindCustom func(e proteus.Querier, id string) (Type, error) `proq:"SELECT * FROM table WHERE id = :id:" prop:"id"`
}

The use case for us is as follows: we generate some standard functionality in the BaseDao struct and would like to be able to regenerate it without losing the custom functionality added by CustomDao.

Is this possible with the reflection API? If you have some guidance to where this could be done, I'd be happy to contribute.

Running UPDATE statements with a ContextQuerier silently fails

I recently made the mistake of specifying a ContextQuerier where I needed a ContextExecutor.

Strangely enough, my test failed because the number of rows returned was zero. However, no error was returned from the function either.

I'm not sure if this is reasonable to ask or if it'd require Proteus to do something more intrusive like introspecting the query, but it might be nice if this edge case was signaled in a more disruptive fashion.

Logging and metrics

We started using proteus some months ago and released the first service using proteus to production last week. We have added logging outside of proteus, but would really like to have a means to instrument proteus with both logging and metrics. We're happy to submit a PR, but before doing so, I'd like to discuss how this should be implemented. For both logging and metrics, we want the statement as well as the execution time included. It could be implemented with an optional event listener interface the client provides upon configuration of proteus or by using a channel to propagate events on. @jonbodner do you have any thoughts on this?

Proteus does not properly populate fields in embedded structs

func TestShouldBuildEmbedded(t *testing.T) {
	type Inner struct {
		Name string `prof:"name"`
	}
	type MyProduct struct {
		Inner
		Id int `prof:"id"`
	}
	type ProductDao struct {
		Insert func(e Executor, p MyProduct) (int64, error)    `proq:"insert into product(name) values(:p.Name:)" prop:"p"`
		Get    func(q Querier, name string) (MyProduct, error) `proq:"select * from product where name=:name:" prop:"name"`
	}

	productDao := ProductDao{}
	c := logger.WithLevel(context.Background(), logger.DEBUG)
	err := ShouldBuild(c, &productDao, Sqlite)
	if err != nil {
		t.Fatal(err)
	}
	db, err := sql.Open("sqlite3", "./proteus_test.db")
	if err != nil {
		t.Fatal(err)
	}

	exec, err := db.Begin()
	if err != nil {
		t.Fatal(err)
	}

	_, err = exec.Exec("CREATE TABLE product(id INTEGER PRIMARY KEY, name VARCHAR(100), null_field VARCHAR(100))")
	if err != nil {
		t.Fatal(err)
	}

	count, err := productDao.Insert(Wrap(exec), MyProduct{Inner: Inner{Name: "foo"}})
	if err != nil {
		t.Fatal(err)
	}
	if count != 1 {
		t.Fatal("Should have modified 1 row")
	}
	prod, err := productDao.Get(Wrap(exec), "foo")
	if err != nil {
		t.Fatal(err)
	}
	if prod.Name != "foo" {
		t.Fatal(fmt.Sprintf("Expected prod with name, got %+v", prod))
	}
}

This test properly creates a record in the database from an embedded field, but the field is not populated when the value is retrieved. The bug is at mapper/mapper.go:117, in the code that builds up the colFieldMap: it doesn't map embedded fields, so it doesn't know how to populate them.

Option to return values of type sql.Result to enable harvesting LastInsertId()

The sql.Result interface contains the LastInsertId() function, which can be important information useful for inserts/upserts.

It would be nice if there was a way that Proteus could return this information back to the caller. Perhaps by specifying sql.Result as a valid return type? My alternative right now is to execute something like last_insert_rowid() against my sqlite3 DB, which is prone to race conditions.

P.S. I keep posting 'nice-to-have' issues. Hope that's okay.
P.P.S. I'm highly motivated to raise a PR on this one, if maintainers are open to that.

Unable to INSERT into []byte column

I have a Dao with a []byte column. When I try to insert, I get
Received unexpected error pq: INSERT has more expressions than target columns

If I remove the reference to the []byte column, things work as requested. Does Proteus support inserting to []byte columns?

let there be multiple instances of the proteus logger

Right now, there is exactly one logger in the system. It'd be better for the logger to work like the HTTP client and server in the standard library: have a default implementation, but allow instances to be constructed as well.

proe tags do not work

The Build function only looks at proq tags, in:
query, ok := curField.Tag.Lookup("proq")
If I tag something with proe instead of proq, it will not be converted to a function, and calling it will cause a nil pointer dereference. This means the examples are probably broken.

Fail loudly when building the DAO

It would be great if we had an option to build a DAO and be certain that all fields are mapped as expected. Right now a warning is written to the logs but the building continues.

I'm thinking of something like a proteus.MustBuild() function that returns an error on any encountered error. What do you think about this? Would you be interested in this feature if we provide a PR? :)

sql.NullString doesn't work

If I try to query a column of type sql.NullString with no value in it, buildStruct fails with a message of the form:

Expected nil, but got: &errors.errorString{s:"Unable to assign nil value to non-pointer struct field ToAddressLine2 of type sql.NullString"}

Add support for ExecContext and QueryContext

The preferred way to support DB queries is via the XXXContext methods. Proteus needs to support function fields where the first parameter is a context.Context and the second parameter is an implementation of an interface that wraps ExecContext or QueryContext.

If a context.Context is provided, any logging information (level or default key/value pairs) will be used for the request. This allows per-request logging levels.

Select from list

Is there any way to do a query of the form:

SELECT * from myTable where id IN (:list:) ?

Prevent Queriers from having a single error output parameter

Right now, you can define a querier with 0, 1, or 2 output parameters. An error is only returned if there are two output parameters. If you use 1 output parameter of type error, the error is swallowed. Also, it doesn't make any sense to have a querier that doesn't query; you should return a value.

The fix is to make sure that if there is only a single parameter being returned it cannot be of type error.

Table-qualified prof tags

I have a table schema like:

CREATE TABLE table2 (
  "id" BIGSERIAL PRIMARY KEY,
  "uuid" VARCHAR(36) UNIQUE NOT NULL,
  "column2" VARCHAR(24) NOT NULL
);

CREATE TABLE table1 (
  "id" BIGSERIAL PRIMARY KEY,
  "uuid" VARCHAR(36) UNIQUE NOT NULL,
  "ref" VARCHAR(36) REFERENCES table2(uuid)
);

I'd like to do a query like SELECT * from table1 JOIN table2 ON table1.ref = table2.uuid, and have table2 referenced as a child struct. The problem is that both tables have id and uuid columns. Can I qualify the prof tags so that table2 has a prof tag of prof:"table2.uuid" for example? I know I can use "AS" clauses for each column, but this becomes a maintenance nightmare for any sizable number of queries.

Add a way to generate a single function

Proteus works by generating function implementations and attaching them to function fields on structs. There's no reason why it can't generate individual functions, too. The signature would look something like:

BuildFunction(f interface{}, query string, params []string) error

f must be a pointer to a function, query is the analog of the proq struct tag, and params is the analog of the prop struct tag.

Support for RETURNING "id" queries

I have an INSERT query that creates a new row on an autoincrement table. Is there any way to have a RETURNING "id" suffix to the query and get the new id?

Add latest/current version information on the front page

I pulled the library like this go get -u github.com/jonbodner/proteus and I get version v0.14.0.

The contents of the file in the version that I have are different from what is on github.

proteus/query_mappers.go

Lines 19 to 22 in a187783

func (pm propFileMapper) Map(name string) string {
	val, _ := pm.properties.Get(name)
	return val
}

My version (v0.14.0)

[screenshot of the local file contents]

What is the current, up-to-date version?

Support custom Scanner slices (like JSONB columns with Arrays)

This project seems great! I'm giving it a go with a project in which, among other things, I use a JSONB column that stores an array of elements.

I've been using sqlx for named parameters so far, which works well but is getting a bit out of hand, which is why this "magic" DAO style seems very interesting.

The problem I'm facing is that the array of elements is a named slice type that implements the Scanner interface to unmarshal from JSON.

Something like this:

type Records []*Record

func (r *Records) Scan(value interface{}) error {
	if row, ok := value.([]byte); ok {
		return json.Unmarshal(row, r)
	}
	return nil
}

func (r Records) Value() (driver.Value, error) {
	return json.Marshal(r)
}

When using proteus, it expands the slice into multiple variables; my vague guess is that it checks whether the value is a slice before checking whether it implements the Scanner interface.

I might be able to write a test and a PR but I'm currently choked so I just wanted to give you a heads up and check if it sounds like the right path first.

Thanks!

Allow an external logger

The lib uses logrus for logging warnings and debug messages. It would be great if one could provide the needed logging functions when building the DAO, possibly with logrus as a fallback. Bundling logrus into this lib seems unnecessary to me. The log package from the standard library could be used as well, thereby removing the dependency.

Is this something we could do? We can of course provide a PR for it if so.
