
aerospike-client-go's People

Contributors

andot, anlhord, ashishshinde, blide, bluele, charl, cstivers78, erikdubbelboer, gdm85, geertjohan, hamper, imkira, kalloc, khaf, korzha, ksedgwic, oldmantaiter, oss92, pirsquare, pnutmath, realmgic, reugn, sarathsp06, sinozu, sm4ll-3gg, sud82, swarit-pandey, tvorog, venilnoronha, wchu-citrusleaf


aerospike-client-go's Issues

AerospikeError in main aerospike package

AerospikeError is now in the types package, which requires an extra import (which I also rename, since types is too generic as an import name) when you want to check the errors.

Shouldn't AerospikeError be in the main aerospike package, including the error number definitions? Or aren't library users supposed to do anything with this struct?

Best Practice: ordering records

Somewhat of a strange question, but it's been bothering me all morning.

Basic scenario

Ordering (sorting) complex types that include an orderable atomic value, e.g.:

Group{1, "My first group"}
Group{2, "My second group"}
Group{3, "My third group"}

When inserting these using client.PutBins and retrieving them with client.ScanAll, they come back in arbitrary order, but I would like them returned sorted 1 through 3. It would be an OK scenario if those values were the PKs.
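Since ScanAll makes no ordering guarantees, one straightforward workaround may be to sort client-side after retrieval. A minimal sketch (the Group struct here is an assumed stand-in for the records above):

```go
package main

import (
	"fmt"
	"sort"
)

// Group mirrors the example records above.
type Group struct {
	ID   int
	Name string
}

// sortGroups orders records by their orderable atomic value after retrieval.
func sortGroups(groups []Group) {
	sort.Slice(groups, func(i, j int) bool { return groups[i].ID < groups[j].ID })
}

func main() {
	// Records come back from ScanAll in arbitrary order.
	groups := []Group{
		{3, "My third group"},
		{1, "My first group"},
		{2, "My second group"},
	}
	sortGroups(groups)
	for _, g := range groups {
		fmt.Println(g.ID, g.Name)
	}
}
```

This trades server-side ordering for a small client-side sort, which is usually fine for result sets that fit in memory.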

The solution that works:

Having a separate bin in a separate key for managing the sequence in which they should be returned. It contains a string slice (the index of the slice indicating which comes before which), and each value is the PK of the given group...
binvalue := []string{"1", "2", "3"}
Doesn't quite feel like this would be the best thing to do ...

The solution that's a bit harder to get working:

Using LargeLists. Since a Group is not atomic, I wouldn't even know where to start.

The solution I'd prefer not to:

Switch over to RDBMS.

-

Which would be the best practice in this given scenario? Or am I missing an obvious solution?
Thank you in advance.

Problem with UDF filter

Tried following the range_filter example and used the lua script.
http://www.aerospike.com/docs/guide/examples/range_filter.html

ERROR : [] /opt/aerospike/sys/udf/lua/sample_filters.lua:44: attempt to index global 'ldte' (a nil value)
This is what is in the stack: {"Ts":888,"V":0}

These are the params I included for lstack.Filter: {"FieldName":"Ts","MinValue":100,"MaxValue":300}

I'm not sure about the UDF internals, so I'm not sure what is going on. I used a JSON-marshaled string to insert, and used it for the filter params as in the example.

Please help

Thank you

unpacker.unpackBlob() returns a []byte that points to receiving buffer which is returned to pool

In unpacker.go:
func (upckr *unpacker) unpackBlob(count int) (interface{}, error) {
...
val = upckr.buffer[upckr.offset : upckr.offset+count]
...
return val, nil
}

However, upckr.buffer is returned to the pool after the command finishes, so the returned blob value points into a reclaimed buffer.

The symptom is that I ran client.execute(), which returned a list of []byte containing wrong values.
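The aliasing hazard is easy to reproduce in plain Go, independent of the client code: a sub-slice shares its parent's backing array, so mutating the pooled buffer after return corrupts the "returned" blob unless it was copied first. A minimal sketch:

```go
package main

import "fmt"

// unpackAlias returns a sub-slice that shares buf's backing array,
// mirroring what unpackBlob does with the pooled buffer.
func unpackAlias(buf []byte, offset, count int) []byte {
	return buf[offset : offset+count]
}

// unpackCopy returns an independent copy, safe even after buf is reused.
func unpackCopy(buf []byte, offset, count int) []byte {
	out := make([]byte, count)
	copy(out, buf[offset:offset+count])
	return out
}

func main() {
	buf := []byte{1, 2, 3, 4}
	alias := unpackAlias(buf, 1, 2)
	safe := unpackCopy(buf, 1, 2)

	// Simulate the pool reusing the buffer for the next command.
	buf[1] = 99

	fmt.Println(alias[0], safe[0]) // prints "99 2": the alias sees the overwrite, the copy does not
}
```

The fix along these lines would be to copy the bytes out of the pooled buffer before returning them.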

32bit support?

I have noticed aerospike-client-go does not build on 32-bit x86 architectures.
The build fails with:

value.go:99: constant 9223372036854775807 overflows uint 

A simple attempt to fix the compilation error with:

-               if !Buffer.Arch64Bits || (val <= math.MaxInt64) {
+               if !Buffer.Arch64Bits || (uint64(val) <= uint64(math.MaxInt64)) {

and the compilation works, but when I run the tests I get lots of failures.

I tried to fix some until I had just 9 out of 121 tests failing, but when I look at the changes I made, and at some parts of the code I don't quite understand, I feel very uneasy about it. For example:

  • why is there no return statement in PackObject, in case uint, after pckr.PackAULong(obj.(uint64))?
  • why is the unpacker converting to uint32 something that was read as int32 but that you want as int64?
        case 0xce:
                val := uint32(Buffer.BytesToInt32(upckr.buffer, upckr.offset))
                upckr.offset += 4
                return int64(val), nil
  • why does unpacker care about the current architecture when in fact it only accepts 64bit?
        case 0xce:
                val := uint32(Buffer.BytesToInt32(upckr.buffer, upckr.offset))
                upckr.offset += 4

               if Buffer.Arch64Bits {
                       return int(val), nil
               }
               return

And the list goes on.
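For what it's worth, the overflow in the original comparison can be avoided portably by doing it entirely in uint64, which is 64 bits on every architecture. A sketch of the idea (not the library's actual fix):

```go
package main

import (
	"fmt"
	"math"
)

// fitsInt64 reports whether a uint64 can be stored losslessly in an int64.
// The comparison is done entirely in uint64, so it compiles on 32-bit
// targets where the untyped constant math.MaxInt64 would overflow uint.
func fitsInt64(val uint64) bool {
	return val <= uint64(math.MaxInt64)
}

func main() {
	fmt.Println(fitsInt64(42))                   // true
	fmt.Println(fitsInt64(18446744073709551615)) // false: max uint64 exceeds int64
}
```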

Question about type checking

I tried to put a JSON struct into Aerospike and got a panic.
The JSON decoder's default rule is to decode integers as float64.

Aerospike doesn't allow float64; it accepts only intXX.

My question: is it a good idea to gently convert floatXX to intXX?
For example:
var test1 float64 = 10 -> int64(10)
but
var test2 float64 = 10.1 -> TYPE_NOT_SUPPORTED

I can write a pull request for this case if you approve it, or I can convert the structure in my own code.
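A minimal sketch of the "gentle" conversion being proposed (the function name and error strings are illustrative, not the client's API):

```go
package main

import (
	"errors"
	"fmt"
	"math"
)

// floatToInt converts a float64 to int64 only when no information is lost:
// the value must be integral and within int64 range.
func floatToInt(f float64) (int64, error) {
	if f != math.Trunc(f) {
		return 0, errors.New("TYPE_NOT_SUPPORTED: non-integral float")
	}
	if f < math.MinInt64 || f >= math.MaxInt64 {
		return 0, errors.New("TYPE_NOT_SUPPORTED: out of int64 range")
	}
	return int64(f), nil
}

func main() {
	v, err := floatToInt(10) // like var test1 float64 = 10
	fmt.Println(v, err)      // prints "10 <nil>"

	_, err = floatToInt(10.1) // like var test2 float64 = 10.1
	fmt.Println(err)          // non-integral: rejected
}
```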

Question about connection queue

I have noticed that when the number of clients exceeds ClientPolicy.ConnectionQueueSize, all connections go directly to the Aerospike server.
I believe it would be better to have more control of what happens when the queue is full.
In my case, when all ConnectionQueueSize connections are being used, I would like to wait until one connection is freed, instead of creating one every single time.
This is because if we have a good number of clients sending an unpredictably large number of requests at a given time to the server, either side (but especially the server) may end up hitting the maximum number of TCP sockets/file descriptors available for the process and the connection will be immediately rejected.
The idea I have in mind is to change GetConnection function so that it uses the timeout to wait if no connection is available.
Along the same line of thought, I also think it would be good to add another option similar to ConnectionQueueSize, perhaps called WaitingQueueSize (0 to keep the current behavior and create a new connection every time, -1 for unlimited, N > 0 to queue up to N waiting requests; beyond N, requests would be immediately rejected).

This would be semantically compatible with the existing version.
What do you think of this?
If you think it's ok I am interested in submitting a PR for that.

Maps aren't being treated as maps in UDF

Whenever I store maps/json using the Go client and try to access that bin using UDF I keep getting the type as "userdata" instead of a table or a list.

Message: bad argument #1 to 'format' (string expected, got userdata)

package main

import (
    "fmt"
    as "github.com/aerospike/aerospike-client-go"
    "log"
    "strconv"
    "time"
)

const udfFilter = `
function queryTest(s, rec)
  local time = os.time()
  local function map_value_merger(mapval1, mapval2)
    return mapval1 + mapval2
  end
  local function reduce_results(a,b)
    return map.merge(a,b,map_value_merger)
  end

  function apply_totalmap_cxt(mymap, rec)
    local topics = rec["Data"]
    if topics ~= nil then
      for k in list.iterator(topics) do
        warn(k)
        if mymap[k] == nil then
          mymap[k] = 0
        end
        mymap[k] = mymap[k] + 1
      end
    end
    return mymap
  end
    return s : aggregate(map(), apply_totalmap_cxt) : reduce(reduce_results)
end

`


func main() {

    client, err := as.NewClient("127.0.0.1", 3000)
    if err != nil {
        log.Fatal(err)
    }

    key, _ := as.NewKey("test", "pets", "example_key")

    topics := []map[interface{}]interface{}{}
    topic1 := map[interface{}]interface{}{}
    topic1["animal"] = "Dogs"
    topic2 := map[interface{}]interface{}{}
    topic2["animal"] = "Cats"

    topics = append(topics, topic1)
    topics = append(topics, topic2)

    data := as.BinMap{
        "CreatedAt": time.Now().Unix(),
        "bin2":      "An elephant is a mouse with an operating system",
        "Data":      topics,
    }

    if err := client.Put(nil, key, data); err != nil {
        log.Fatal(err)
    }

    regTask, err := client.RegisterUDF(nil, []byte(udfFilter), "queryTest.lua", as.LUA)
    if err != nil {
        log.Fatal(err)
    }
    for {
        if err := <-regTask.OnComplete(); err == nil {
            break
        } else {
            log.Println(err)
        }
    }

    stm := as.NewStatement("test", "pets")
    stm.SetAggregateFunction("queryTest", "queryTest", nil, true)

    recordset, err := client.Query(nil, stm)
    if err != nil {
        log.Fatal(err)
    }

    results := map[string]int{}

L:
    for {
        select {
        case rec, _ := <-recordset.Records:
            if rec == nil {
                break L
            }
            if result, ok := rec.Bins["SUCCESS"].(map[interface{}]interface{}); ok {
                for k, v := range result {
                    key := ""
                    if r, ok := k.(int); ok {
                        key = fmt.Sprintf("%d", r)
                    }
                    if r, ok := k.(string); ok {
                        key = r
                    }
                    if r, ok := v.(int); ok {
                        results[key] = results[key] + r
                    }
                }
            }
        }
    }

    for k, v := range results {
        fmt.Println(k + " :: " + strconv.Itoa(v))
    }
}

Race condition during client authentication

The following sequence illustrates a possible race condition during client authentication.

https://github.com/aerospike/aerospike-client-go/blob/master/node.go#L223
https://github.com/aerospike/aerospike-client-go/blob/master/client.go#L1062
https://github.com/aerospike/aerospike-client-go/blob/master/cluster.go#L705-L710

I do not think changePassword should exist at all in cluster.go. Even if we protect the access to the password against race conditions, this will not propagate to other clients running in different processes/machines.
I understand this is a "utility" feature, but I think it should be the responsibility of the developers to reload the entire aerospike client.

Compile Error on master and tag 1.0.0

When I compile tag 1.0.0, I get these errors:

./cluster.go:26: imported and not used: "github.com/aerospike/aerospike-client-go/types"
./cluster.go:552: undefined: InvalidNodeErr
./command.go:22: imported and not used: "github.com/aerospike/aerospike-client-go/types"
./command.go:725: undefined: TimeoutErr
./read_command.go:75: undefined: buffer.MsgLenFromBytes
./unpacker.go:22: imported and not used: "github.com/aerospike/aerospike-client-go/types"
./unpacker.go:272: undefined: SerializationErr
./value.go:25: imported and not used: "github.com/aerospike/aerospike-client-go/types"
./value.go:123: undefined: TypeNotSupportedErr

When I compile master, I get this error:
./value.go:565: undefined: buffer.VarBytesToInt64

Did you update the code recently? How can I fix this?

Possible GC optimization for Value types

I use UDFs extensively, and I found that every call to Client.Execute() needs to call a couple of NewXxxValue() functions, each of which wraps a type in a struct. For the primitive types used most often (by me :)), it seems a type definition can avoid allocating a new struct.

Present:
type BytesValue struct {
    bytes []byte
}

New:
type BytesValue []byte

Usually client code can do the cast without calling a constructor. But we can still keep the function for consistency and convenience:
func NewBytesValue(bytes []byte) BytesValue {
    return BytesValue(bytes)
}

func (b BytesValue) estimateSize() int {
    return len(b)
}

For List or Map, I am not in favor of adding a []byte to the ListValue struct and allocating the buffer and packing it in NewListValue(). I know it helps to estimate the size of the final buffer so that the final buffer can be allocated in one shot. But all these map and list values have already allocated a buffer in order to achieve that. It may be more efficient to let the Go runtime grow the buffer as needed, which very possibly involves fewer memory allocations. This is just my guess, without profiling to support it though :) I saw that the reader() method uses the buffer, but I think it's very easy to replace it with a WriteTo(io.Writer) method.

Optimisation computeDigest() in key.go?

I notice that in key.go, every time a digest is computed for a key, a new slice has to be allocated. This stresses the GC. In the Aerospike Java client source code, I notice that a thread-local buffer technique is used to improve this, but in Go there is no equivalent technique. Why not use a singleton buffer to hold data when computing the key digest, instead of allocating a new slice every time?
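Go's rough equivalent of a thread-local buffer is sync.Pool, which is safe for concurrent use (a singleton buffer would need locking). A sketch of the idea, using SHA-1 purely as a stand-in for the RIPEMD-160 digest the client actually computes:

```go
package main

import (
	"crypto/sha1"
	"fmt"
	"sync"
)

// bufPool amortizes the per-digest scratch allocation, similar in spirit
// to the Java client's thread-local buffer.
var bufPool = sync.Pool{
	New: func() interface{} { return make([]byte, 0, 256) },
}

// keyDigest hashes set name + user key using a pooled scratch buffer.
// SHA-1 here is only a stand-in; the real client uses RIPEMD-160.
func keyDigest(setName string, userKey []byte) [20]byte {
	buf := bufPool.Get().([]byte)[:0]
	buf = append(buf, setName...)
	buf = append(buf, userKey...)
	sum := sha1.Sum(buf)
	bufPool.Put(buf)
	return sum
}

func main() {
	d := keyDigest("demo", []byte("key1"))
	fmt.Printf("%x\n", d[:4])
}
```

Unlike a singleton, pooled buffers can be used by many goroutines at once, and the GC is free to discard them under memory pressure.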

Stop force-pushing the master branch

It breaks deployment everywhere:

=> Fetching github.com/aerospike/aerospike-client-go...
Cloning into '/home/travis/gopath/src/github.com/remerge/snap/.vendor/tmp/src/github.com/aerospike/aerospike-client-go'...
remote: Counting objects: 576, done.
remote: Compressing objects: 100% (178/178), done.
remote: Total 576 (delta 382), reused 551 (delta 369)
Receiving objects: 100% (576/576), 306.74 KiB | 0 bytes/s, done.
Resolving deltas: 100% (382/382), done.
Checking connectivity... done.
Checking out "d596cdfbd77da0ab48e99cb3262ec25ca3445615"
goop: Command failed with exit status 128

increment counter

Hi,

I'd like to implement an incremental counter.
What would be the fastest and most efficient way of doing this? I would also like to get the value of the bin once it is incremented. My concern is that I have to run 2 queries: one to update the counter and one to read it. My worry is that with many concurrent queries, there will be a gap between the update and the read of the counter value, giving an inaccurate result.

Please advise.

Thanks

Timeout and connection pooling

In our production setup, we have several client servers querying the Aerospike cluster (Get requests). One of the servers keeps having a massive number of connections to the cluster in the TIME_WAIT state.

We figured out today that it is because we are setting a timeout in the Get request, and we didn't realize that the timeout is also set as the socket timeout for the underlying connection.

readPolicy := aerospike.NewPolicy()
readPolicy.Timeout = 80 * time.Millisecond 
r, err := client.Get(readPolicy, k)

That server itself seems to have longer network lag with the cluster, but we still want to enforce the same timeout for the get in our app. In this case, the connection pooling is useless because we close the connections so fast.

Our temporary solution is to run this call in a goroutine and enforce the timeout on the goroutine.
Wondering if there are any suggestions, or whether the driver could choose not to close the socket.

Thanks

Decouple Go object marshaller/unmarshaller from Client

I came across the latest 1.4 release and found that the new marshal/unmarshal feature is part of the core Client, exposed as Client.PutObject/GetObject. I think it's better to decouple data serialization from the core Aerospike Client. E.g., instead of adding two new methods PutObject/GetObject to the Client object, I would define two funcs like Marshal(obj interface{}) (BinMap, error) and Unmarshal(bins BinMap, obj interface{}) error, and put them in a separate package. Users can choose whether to use it; if not, its code would not be included with the Client code. Right now the serialization code is in marshal.go and the deserialization code is part of read_command.go. It adds unnecessary complexity to the core client code.

As a side note, object serialization is a hard problem. I don't see much benefit of this implementation over using protobuf or JSON, but there are obvious drawbacks (compatibility issues with other languages, it's unclear to me how it handles schema evolution, top-level variable serialization may differ from embedded variables, etc.).

application crash during temporary network problem

We got a temporary network problem in our prod env and it led to an application crash.
Here's the log:

 panic: send on closed channel
 goroutine 22366696 [running]:
 github.com/aerospike/aerospike-client-go.func·006(0xc20b83afa0)
    /usr/go/src/github.com/aerospike/aerospike-client-go/client.go:437 +0x145
 created by github.com/aerospike/aerospike-client-go.(*Client).ScanAll
    /usr/go/src/github.com/aerospike/aerospike-client-go/client.go:440 +0x4d4

Protection against this issue already exists (client.go#L439), but that's not enough, since there's still a chance of sending on a closed channel.

    go func(node *Node) {
        if err := clnt.scanNode(&policy, node, res, namespace, setName, binNames...); err != nil {
            if _, ok := <-res.Errors; ok {
                res.Errors <- err //still could crash here
            }
        }
    }(node)

In this case, recover() in the application doesn't help, because the error occurs in a separate goroutine within aerospike-client-go.

My suggestion is to never rely on

...
if _, ok := <-res.Errors; ok {
...

DropIndex function does not work.

The DropIndex function completes without any error, but when I try to create an index with the same name, an "Index already exists" error is returned.

Any way to get list of sets?

Aerospike has a nice tool, aql, that I can use to fire a SHOW SETS query. Just wondering if it's possible to do the same with the Go client.

Connect fails: domain lookups on IP fail

If I try to connect to an aerospike cluster by IP, I get output like this when I enable debug output.

2014/11/17 21:28:04 No connections available; seeding...
2014/11/17 21:28:04 Seeding the cluster. Seeds count: 1
2014/11/17 21:28:04 Seed 127.0.0.1:3000 failed: lookup 127.0.0.1: invalid domain name
2014/11/17 21:28:04 Tend finished. Live node count: 0
2014/11/17 21:28:04 No connections available; seeding...
2014/11/17 21:28:04 Seeding the cluster. Seeds count: 1
2014/11/17 21:28:04 Seed 127.0.0.1:3000 failed: lookup 127.0.0.1: invalid domain name
2014/11/17 21:28:04 Tend finished. Live node count: 0
2014/11/17 21:28:04 Failed to connect to host(s): [127.0.0.1:3000]

When I change the IP address to a valid hostname, I get output like this...

2014/11/17 21:23:51 No connections available; seeding...
2014/11/17 21:23:51 Seeding the cluster. Seeds count: 1
2014/11/17 21:23:51 Node Validator has 1 nodes.
2014/11/17 21:23:51 Seed ip-10-10-200-210.ec2.internal:3000 failed: lookup 10.10.200.210: invalid domain name
2014/11/17 21:23:51 Tend finished. Live node count: 0
2014/11/17 21:23:51 No connections available; seeding...
2014/11/17 21:23:51 Seeding the cluster. Seeds count: 1
2014/11/17 21:23:51 Node Validator has 1 nodes.
2014/11/17 21:23:51 Seed ip-10-10-200-210.ec2.internal:3000 failed: lookup 10.10.200.210: invalid domain name
2014/11/17 21:23:51 Tend finished. Live node count: 0
2014/11/17 21:23:51 Failed to connect to host(s): [ip-10-10-200-210.ec2.internal:3000]

You can see the program I'm using to test this issue here: https://gist.github.com/hopkinsth/2b2c68b1a116ff3f40c1

I'm running this on an EC2 instance; uname -r is 3.10.40-50.136.amzn1.x86_64.

The host I'm trying to connect to definitely runs the Aerospike server on that port. I'm separately using the aerospike Java library to connect to it.

Got many "EOF" errors after upgrade to 1.6.4

  • we got 6000+ "EOF" errors every day after upgrading to 1.6.3
  • when we roll back to 1.6.2, the number drops to about 100
  • we use a 3-node Aerospike cluster on a LAN, and the LAN is high speed and very stable
  • any advice?

thanks!

panic during query

We got the error below:

panic: runtime error: slice bounds out of range

goroutine 192347 [running]:
github.com/aerospike/aerospike-client-go.(*baseMultiCommand).parseKey(0xc208a118a0, 0x979e, 0x4000, 0x0, 0x0)
/Users/cuixg/go/src/github.com/aerospike/aerospike-client-go/batch_command.go:100 +0x41a
github.com/aerospike/aerospike-client-go.(*queryRecordCommand).parseRecordResults(0xc2082d8060, 0x7f8a95f7f628, 0xc2082d8060, 0x1ffc0, 0x20300000001
ffc0, 0x0, 0x0)
/Users/cuixg/go/src/github.com/aerospike/aerospike-client-go/query_record_command.go:71 +0x6a4
github.com/aerospike/aerospike-client-go.(*baseMultiCommand).parseResult(0xc208a118a0, 0x7f8a95f7f628, 0xc2082d8060, 0xc208e6e580, 0x0, 0x0)
/Users/cuixg/go/src/github.com/aerospike/aerospike-client-go/batch_command.go:66 +0x102
github.com/aerospike/aerospike-client-go.(*queryCommand).parseResult(0xc208e9ec80, 0x7f8a95f7f628, 0xc2082d8060, 0xc208e6e580, 0x0, 0x0)
/Users/cuixg/go/src/github.com/aerospike/aerospike-client-go/query_command.go:41 +0x5b
github.com/aerospike/aerospike-client-go.(*baseCommand).execute(0xc208cff9b0, 0x7f8a95f7f628, 0xc2082d8060, 0x0, 0x0)
/Users/cuixg/go/src/github.com/aerospike/aerospike-client-go/command.go:989 +0x754
github.com/aerospike/aerospike-client-go.(*queryRecordCommand).Execute(0xc2082d8060, 0x0, 0x0)
/Users/cuixg/go/src/github.com/aerospike/aerospike-client-go/query_record_command.go:128 +0xa1
github.com/aerospike/aerospike-client-go.func·008()
/Users/cuixg/go/src/github.com/aerospike/aerospike-client-go/client.go:860 +0x33
created by github.com/aerospike/aerospike-client-go.(*Client).Query
/Users/cuixg/go/src/github.com/aerospike/aerospike-client-go/client.go:864 +0x385

We're using the latest version of aerospike-client-go with aerospike server 3.5.9
our application is a long-running server, doing a query every few minutes. Before this crash, it had been serving for a week.

Bug with parsing response from Aerospike node

Hello guys!
I faced an issue that appears rather randomly. Usually I observe it right after restarting all nodes in the Aerospike cluster. It seems the Go client receives an invalid response from an Aerospike node and fails to parse it. Here is the stack trace (I removed some data because of our company's policy; I cannot publish even the name of our project, sorry :))

{
    "message": "runtime error: slice bounds out of range",
    "data": {
        "FilePath": "github.com/aerospike/aerospike-client-go/utils/buffer/buffer.go",
        "Line": 112,
        "Stack": [
            "goroutine 113 [running]:",
            ...,
            "runtime.panic(0x8f0060, 0xc7632f) /usr/local/go/src/pkg/runtime/panic.c:248 +0x18d",
            "github.com/aerospike/aerospike-client-go/utils/buffer.BytesToInt32(0xc208430600, 0x200, 0x200, 0x2030004, 0xc202030000) github.com/aerospike/aerospike-client-go/utils/buffer/buffer.go:112 +0xca",
            "github.com/aerospike/aerospike-client-go.(*readCommand).parseRecord(0xc20867a230, 0x1, 0x3, 0x0, 0xfffffffff73a76b7, 0x0, 0x0, 0x0) github.com/aerospike/aerospike-client-go/read_command.go:155 +0xd3",
            "github.com/aerospike/aerospike-client-go.(*readCommand).parseResult(0xc20867a230, 0x7fcc206a0aa8, 0xc20867a230, 0xc208487fe0, 0x0, 0x0) github.com/aerospike/aerospike-client-go/read_command.go:113 +0x6c6",
            "github.com/aerospike/aerospike-client-go.(*baseCommand).execute(0xc20867a230, 0x7fcc206a0aa8, 0xc20867a230, 0x0, 0x0) github.com/aerospike/aerospike-client-go/command.go:813 +0x76d",
            "github.com/aerospike/aerospike-client-go.(*readCommand).Execute(0xc20867a230, 0x0, 0x0) github.com/aerospike/aerospike-client-go/read_command.go:224 +0x73",
            "github.com/aerospike/aerospike-client-go.(*Client).Get(0xc2080321b8, 0xc20826c7e0, 0xc2087b32c0, 0xc2087ebf90, 0x1, 0x1, 0xc2087b32c0, 0x0, 0x0) github.com/aerospike/aerospike-client-go/client.go:248 +0x121",

We have a test cluster made of 2 nodes. Namespace configuration:

        replication-factor 3
        memory-size 4G
        default-ttl 30d
        storage-engine memory

Is there any way to delete records using Query?

Hello guys!
I have a question about using the Aerospike client. The problem I'm trying to solve is deleting records with the same primary key and different secondary keys. I can get those records with a Query command, but how can I remove them with a single query?
Retrieving the records and then deleting them one by one is not an option :( Writing Lua scripts is not an option either.

Thanks for any suggestion!

something about Expiration

First, Expiration in Record is actually int32, not int, because reading the Record back shows that Expiration can't be a larger number.

Second, the code comment in Record says:

// Date record will expire, in seconds from Jan 01 2010 00:00:00 GMT
Expiration int

But I think it's seconds from now, because if I set a WritePolicy and write a Record, wait a few seconds, and read the Record back, the Expiration of the Record appears to be seconds from now. This code comment should be clarified.

Either packer.PackLong or packer.PackULong is broken

PackLong looks like:

func (pckr *packer) PackLong(valType int, val int64) {
    pckr.buffer.WriteByte(byte(valType))
    pos := pckr.grow(_b8)
    pckr.buffer.Write(Buffer.Int64ToBytes(val, pckr.buffer.Bytes(), pos))
}

While PackULong looks like:

func (pckr *packer) PackULong(val uint64) {
    pckr.buffer.WriteByte(byte(0xcf))
    pos := pckr.grow(_b8)
    Buffer.Int64ToBytes(int64(val), pckr.buffer.Bytes(), pos)
}

Is the pckr.buffer.Write in PackLong needed or is it also needed in PackULong?

Support uint64 as key or value

Right now NewValue() and the like do not support uint64 values greater than math.MaxInt64.
It's inconvenient that I have to encode uint64 in some way before I can store it in Aerospike.
I think native support of uint64 is necessary.

NewClientWithPolicyAndHost has bug

If I provide a Host that is unreachable as the first argument, NewClientWithPolicyAndHost will return an error.

host1 := &Host{
    Name: "192.168.1.22",   // a non-exist host
    Port: 3000,
}
host2 := &Host{
    Name: "192.168.1.23",   // a existing host but without aerospike server
    Port: 3000,
}
host3 := &Host{
    Name: "192.168.1.24",   // a aerospike server host
    Port: 3000,
}

cli, err := NewClientWithPolicyAndHost(nil, host2, host3) // that's right

cli, err := NewClientWithPolicyAndHost(nil, host1, host3) // bug: should not return an err, but it does!

Lstack

Instead of peek and scan, is there a way to obtain the records by a field in an lstack?

Thanks

Server returns ResultCode 21 - unlisted in result_code.go

Because of this, I get an error with an empty message. Thank you.

UPDATE:
Maybe it is not the code, but in any case it came from resultCode := cmd.dataBuffer[13] & 0xFF.
At the same time, I found this line in my logs:

Sep 21 2014 19:32:06 GMT: WARNING (rw): (thr_rw.c::3524) too large bin name 21 passed in, parameter error

My proposal: report errors about too-large bin names more clearly.

ScanAll is very slow

I haven't taken the time to run any benchmarks, but it looks as if the time is spent in the actual ScanAll call or in iterating the results.

I have only 2 records:

func AccountsGet(w http.ResponseWriter, r *http.Request) {
    policy := NewScanPolicy()
    recs, err := client.ScanAll(policy, "test", "accounts")
    if err != nil {
        panic(err)
    }

    var accounts []Account
    for rec := range recs.Records {
        accounts = append(accounts, Account{
            Email:     rec.Bins["email"].(string),
            FirstName: rec.Bins["firstName"].(string),
        })
    }

    WriteJson(accounts)
}

This is taking ~400ms for just this handler (no middleware) as opposed to ~5ms just using Get() on one key.

Sorry for not digging into it any further. I'm just checking if I'm doing it right. It's basically a copy-paste from https://github.com/aerospike-labs/stock-exchange/blob/8db259acc836182c4f2494b89e3ba6d032be08d7/exchange/api.go#L15

Is this expected performance?

ExecuteUDF hangs forever

Hi guys,

I'm currently running into an issue with ExecuteUDF. The first call works without any issue; however, the second call never returns.

It is stuck in the for loop : https://github.com/aerospike/aerospike-client-go/blob/master/server_command.go#L41

On the server side, the first call to the UDF finishes without any issues.

Sep 04 14:36:03 aerospike aerospike[24153]: Sep 04 2014 12:36:03 GMT: INFO (scan): (thr_tscan.c::1202) SCAN JOB DONE  [id =1298498081: ns= test set=users scanned=1 expired=0 set_diff=3 elapsed=214 (ms)]

with aql

aql> show scans
+---------------------+--------+-------------+-----------+--------------+----------+-------------+-------------------+------------+--------------+---------+----------+------------------+--------+--------+------------+--------------+-----------+
| udf-avg-runtime(ms) | ns     | udf-updated | recs_read | udf-filename | run_time | udf-success | udf-function      | trid       | job-progress | set     | priority | job-type         | module | status | udf-failed | net_io_bytes | mem_usage |
+---------------------+--------+-------------+-----------+--------------+----------+-------------+-------------------+------------+--------------+---------+----------+------------------+--------+--------+------------+--------------+-----------+
|                     | "test  |             |           | "audience    | 21       |             | "delete_audience  | 129849808  | 10           | "users  |          | "BACKGROUND_UDF  | "scan  | "DONE  |            | 3            |           |
+---------------------+--------+-------------+-----------+--------------+----------+-------------+-------------------+------------+--------------+---------+----------+------------------+--------+--------+------------+--------------+-----------+
1 row in set (0.000 secs)

But the second call fails (and the go code hangs)

Sep 04 14:36:59 aerospike aerospike[24153]: Sep 04 2014 12:36:59 GMT: INFO (scan): (thr_tscan.c::664) scan job received
Sep 04 14:36:59 aerospike aerospike[24153]: Sep 04 2014 12:36:59 GMT: INFO (scan): (thr_tscan.c::715) scan_option 0x0 0x64
Sep 04 14:36:59 aerospike aerospike[24153]: Sep 04 2014 12:36:59 GMT: DEBUG (udf): (udf_rw.c:udf_call_init:368) UDF scan background op received
Sep 04 14:36:59 aerospike aerospike[24153]: Sep 04 2014 12:36:59 GMT: INFO (scan): (thr_tscan.c::766) NO bins specified select all
Sep 04 14:36:59 aerospike aerospike[24153]: Sep 04 2014 12:36:59 GMT: INFO (scan): (thr_tscan.c::800) scan option: Fail if cluster change False
Sep 04 14:36:59 aerospike aerospike[24153]: Sep 04 2014 12:36:59 GMT: INFO (scan): (thr_tscan.c::801) scan option: Background Job False
Sep 04 14:36:59 aerospike aerospike[24153]: Sep 04 2014 12:36:59 GMT: INFO (scan): (thr_tscan.c::802) scan option: priority is 0 n_threads 3 job_type 1
Sep 04 14:36:59 aerospike aerospike[24153]: Sep 04 2014 12:36:59 GMT: INFO (scan): (thr_tscan.c::803) scan option: scan_pct is 100
Sep 04 14:36:59 aerospike aerospike[24153]: Sep 04 2014 12:36:59 GMT: WARNING (scan): (thr_tscan.c::807) not starting scan 1298498081 because rchash_put() failed with error -4
Sep 04 14:36:59 aerospike aerospike[24153]: Sep 04 2014 12:36:59 GMT: INFO (tsvc): (thr_tsvc.c::388) Scan failed with error -2

It looks like the transaction id is the same as the one in the first call, which is strange. Therefore rchash_put_unique seems to return RCHASH_ERR_FOUND (https://github.com/aerospike/aerospike-server/blob/master/as/src/base/thr_tscan.c#L806).

The transaction id always seems to be the same.

If I try to delete the scan:

aql> show scans
+---------------------+--------+-------------+-----------+--------------+----------+-------------+-------------------+------------+--------------+---------+----------+------------------+--------+--------+------------+--------------+-----------+
| udf-avg-runtime(ms) | ns     | udf-updated | recs_read | udf-filename | run_time | udf-success | udf-function      | trid       | job-progress | set     | priority | job-type         | module | status | udf-failed | net_io_bytes | mem_usage |
+---------------------+--------+-------------+-----------+--------------+----------+-------------+-------------------+------------+--------------+---------+----------+------------------+--------+--------+------------+--------------+-----------+
|                     | "test  |             |           | "audience    | 23       |             | "delete_audience  | 129849808  | 10           | "users  |          | "BACKGROUND_UDF  | "scan  | "DONE  |            | 3            |           |
+---------------------+--------+-------------+-----------+--------------+----------+-------------+-------------------+------------+--------------+---------+----------+------------------+--------+--------+------------+--------------+-----------+
1 row in set (0.000 secs)
OK

aql> kill_query 129849808
OK

aql> show scans
+---------------------+--------+-------------+-----------+--------------+----------+-------------+-------------------+------------+--------------+---------+----------+------------------+--------+--------+------------+--------------+-----------+
| udf-avg-runtime(ms) | ns     | udf-updated | recs_read | udf-filename | run_time | udf-success | udf-function      | trid       | job-progress | set     | priority | job-type         | module | status | udf-failed | net_io_bytes | mem_usage |
+---------------------+--------+-------------+-----------+--------------+----------+-------------+-------------------+------------+--------------+---------+----------+------------------+--------+--------+------------+--------------+-----------+
|                     | "test  |             |           | "audience    | 23       |             | "delete_audience  | 129849808  | 10           | "users  |          | "BACKGROUND_UDF  | "scan  | "DONE  |            | 3            |           |
+---------------------+--------+-------------+-----------+--------------+----------+-------------+-------------------+------------+--------------+---------+----------+------------------+--------+--------+------------+--------------+-----------+
1 row in set (0.000 secs)
OK

and in the logs

Sep 04 14:43:24 aerospike aerospike[24153]: Sep 04 2014 12:43:24 GMT: INFO (scan): (thr_query.c::499) Query job with transaction id [129849808] does not exist

Any ideas?

gopkg.in support

Is there any way I could convince you to change tag or branch versioning to make this package available via gopkg.in?

Essentially a simple v prefix on the tag would be sufficient. Instead of 1.0.0 the tag would be v1.0.0.
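For illustration, a v-prefixed tag can simply alias the existing one, so both schemes coexist. Demonstrated in a throwaway repo; the release commit here is a stand-in:

```shell
# Show that a v-prefixed tag can point at the same commit as the plain tag.
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "release"
git tag 1.0.0            # existing style
git tag v1.0.0 1.0.0     # gopkg.in-compatible alias of the same commit
git tag -l               # lists both 1.0.0 and v1.0.0
```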

From gopkg.in docs: http://labix.org/gopkg.in#SupportedURLs

I can already see that you use the current versioning scheme throughout all the client implementations so I'm not sure this would be acceptable.

Bonus points for a mirror in the form of github.com/go-aerospike/aerospike as this would make gopkg.in packages even more concise (gopkg.in/aerospike.v1).

Best,
Alex

tools/benchmark compile error

    # github.com/aerospike/aerospike-client-go/tools/benchmark
    ./benchmark.go:34: imported and not used: "github.com/aerospike/aerospike-client-go/types"
    ./benchmark.go:233: undefined: ErrTimeout

List secondary indexes

I haven't been able to get the list of secondary indexes in a namespace; the equivalent of

show indexes <namespace>

in aql
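As far as I can tell the Go client has no dedicated call for this, but aql's show indexes is backed by the server's sindex info command, and the client can issue raw info requests (something like aerospike.RequestNodeInfo(node, "sindex"); check your client version for the exact helper). A parsing sketch under the assumption that the response is semicolon-separated entries of colon-separated key=value fields:

```go
package main

import (
	"fmt"
	"strings"
)

// parseSindexInfo splits the response of the "sindex" info command into one
// map per index. Assumed wire format (verify against your server version):
// entries separated by ';', fields separated by ':', each field "key=value".
func parseSindexInfo(resp string) []map[string]string {
	var indexes []map[string]string
	for _, entry := range strings.Split(strings.TrimSuffix(resp, ";"), ";") {
		if entry == "" {
			continue
		}
		fields := map[string]string{}
		for _, kv := range strings.Split(entry, ":") {
			if i := strings.Index(kv, "="); i >= 0 {
				fields[kv[:i]] = kv[i+1:]
			}
		}
		indexes = append(indexes, fields)
	}
	return indexes
}

func main() {
	// Sample response with hypothetical values; a live setup would obtain
	// it from the node's info endpoint instead.
	resp := "ns=test:set=users:indexname=idx_age:bin=age:type=NUMERIC:state=RA;"
	for _, idx := range parseSindexInfo(resp) {
		fmt.Println(idx["ns"], idx["set"], idx["indexname"])
	}
}
```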

int64 vs int

Hi, when I store a value with type int64, it works fine, but when I read the value back its type is int. Is this a bug?
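This looks like expected behavior rather than a bug: all integer kinds travel as one 64-bit integer on the wire, and the client hands them back as Go int. A sketch of the read side, converting the generic bin value back to int64 (lossless on 64-bit platforms; whether 32-bit builds need extra care is an assumption worth verifying for your client version):

```go
package main

import "fmt"

// asInt64 converts the interface{} value found in rec.Bins back to int64.
func asInt64(binValue interface{}) (int64, bool) {
	v, ok := binValue.(int)
	return int64(v), ok
}

func main() {
	stored := int64(1 << 40)
	// What Get returns in rec.Bins for an int64 that was Put:
	var fromBins interface{} = int(stored)
	v, ok := asInt64(fromBins)
	fmt.Println(ok, v == stored) // the value survives the round trip
}
```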

major flaw in msgpack encoding for maps

Today I noticed that thousands of records in our production database - written with the current aerospike-go client - are invalid (kind of a horror scenario). Reading one of these records results in invalid data or panics the library.

This happens because a type check for uint64 is missing when msgpack-encoding a map, so the encoded msgpack is invalid. Sometimes reading it causes a panic, sometimes it returns mixed-up data. Furthermore, the binary data cannot be decoded anymore by any msgpack library.

Example:

    package main

    import (
        "fmt"

        "github.com/aerospike/aerospike-client-go"
    )

    func main() {
        host := aerospike.NewHost("0.0.0.0", 4000)
        client, _ := aerospike.NewClientWithPolicyAndHost(nil, host)
        key, _ := aerospike.NewKey("test", "test", []byte("test"))
        err := client.Put(nil, key, aerospike.BinMap{"3": map[interface{}]interface{}{"error": uint64(12346789), "number": 77777, "abc": "efg"}})
        if err != nil {
            fmt.Println(err)
        }
        rec, err := client.Get(nil, key)
        if err != nil {
            fmt.Println(err)
        }
        fmt.Println(rec.Bins)
    }

Running this produces:

map[3:map[error:number 77777:abc efg:0]]

The fix for PackObject / unpackObject looks quite simple to me. As far as I know, msgpack supports uint64 inside maps. If there is any reason not to support this, there should be an error during the encoding phase.

[Cosmetics] Godoc incorrectly written

The guidelines at http://blog.golang.org/godoc-documenting-go-code are not always followed.

When I see this, for example (in file client.go, lines 73-76):

//  Determine if we are ready to talk to the database server cluster.
func (clnt *Client) IsConnected() bool {
    return clnt.cluster.IsConnected()
}

it should be written as:

//  IsConnected determines if we are ready to talk to the database server cluster.
func (clnt *Client) IsConnected() bool {
    return clnt.cluster.IsConnected()
}

There are quite a few places where this isn't done correctly.
