
radix's Introduction

Radix

Radix is a full-featured Redis client for Go. See the reference links below for documentation and general usage examples.

v3 Documentation

v4 Documentation

Please open an issue, or start a discussion in the chat, before opening a pull request!

Features

  • Standard print-like API which supports all current and future redis commands.

  • Connection pool which uses connection sharing to minimize system calls.

  • Full support for Sentinel and Cluster.

  • Helpers for EVAL, SCAN, Streams, and Pipelining.

  • Support for pubsub, as well as persistent pubsub wherein if a connection is lost a new one transparently replaces it.

  • API design allows for custom implementations of nearly anything.

Versions

There are two major versions of radix being supported:

  • v3 is the more mature version, but lacks the polished API of v4. v3 is only accepting bug fixes at this point.

  • v4 has feature parity with v3 and more! The biggest selling points are:

    • More polished API.
    • Full RESP3 support.
    • Support for context.Context on all blocking operations.
    • Connection sharing (called "implicit pipelining" in v3) now works with Pipeline and EvalScript.

    View the CHANGELOG for more details.

Installation and Usage

Radix always aims to support the two most recent versions of Go, and will likely work with versions prior to those as well.

Module-aware mode:

go get github.com/mediocregopher/radix/v3
// import github.com/mediocregopher/radix/v3

go get github.com/mediocregopher/radix/v4
// import github.com/mediocregopher/radix/v4
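
For a quick sanity check, here is a minimal, hedged usage sketch against v3 (the key names and pool size are illustrative):

package main

import (
	"fmt"
	"log"

	"github.com/mediocregopher/radix/v3"
)

func main() {
	// NewPool opens the given number of connections to the address.
	pool, err := radix.NewPool("tcp", "127.0.0.1:6379", 10)
	if err != nil {
		log.Fatal(err)
	}
	defer pool.Close()

	// Cmd writes the reply into the value pointed to by its first argument.
	if err := pool.Do(radix.Cmd(nil, "SET", "foo", "bar")); err != nil {
		log.Fatal(err)
	}

	var fooVal string
	if err := pool.Do(radix.Cmd(&fooVal, "GET", "foo")); err != nil {
		log.Fatal(err)
	}
	fmt.Println(fooVal) // bar
}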

Testing

# requires a redis server running on 127.0.0.1:6379
go test github.com/mediocregopher/radix/v3
go test github.com/mediocregopher/radix/v4

Benchmarks

Benchmarks were run in as close to a "real" environment as possible. Two GCE instances were booted up, one hosting the redis server with 2vCPUs, the other running the benchmarks (found in the bench directory) with 16vCPUs.

The benchmarks test a variety of situations against many different redis drivers, and the results are very large. You can view them here. Below are some highlights (I've tried to be fair here):

For a typical workload, which is lots of concurrent commands with relatively small amounts of data, radix outperforms all tested drivers except redispipe:

BenchmarkDrivers/parallel/no_pipeline/small_kv/radixv4-64                    	17815254	      2917 ns/op	     199 B/op	       6 allocs/op
BenchmarkDrivers/parallel/no_pipeline/small_kv/radixv3-64                    	16688293	      3120 ns/op	     109 B/op	       4 allocs/op
BenchmarkDrivers/parallel/no_pipeline/small_kv/redigo-64                     	 3504063	     15092 ns/op	     168 B/op	       9 allocs/op
BenchmarkDrivers/parallel/no_pipeline/small_kv/redispipe_pause150us-64       	31668576	      1680 ns/op	     217 B/op	      11 allocs/op
BenchmarkDrivers/parallel/no_pipeline/small_kv/redispipe_pause0-64           	31149280	      1685 ns/op	     218 B/op	      11 allocs/op
BenchmarkDrivers/parallel/no_pipeline/small_kv/go-redis-64                   	 3768988	     14409 ns/op	     411 B/op	      13 allocs/op

The story is similar for pipelining commands concurrently (radixv3 doesn't do as well here, because it doesn't support connection sharing for pipeline commands):

BenchmarkDrivers/parallel/pipeline/small_kv/radixv4-64                       	24720337	      2245 ns/op	     508 B/op	      13 allocs/op
BenchmarkDrivers/parallel/pipeline/small_kv/radixv3-64                       	 6921868	      7757 ns/op	     165 B/op	       7 allocs/op
BenchmarkDrivers/parallel/pipeline/small_kv/redigo-64                        	 6738849	      8080 ns/op	     170 B/op	       9 allocs/op
BenchmarkDrivers/parallel/pipeline/small_kv/redispipe_pause150us-64          	44479539	      1148 ns/op	     316 B/op	      12 allocs/op
BenchmarkDrivers/parallel/pipeline/small_kv/redispipe_pause0-64              	45290868	      1126 ns/op	     315 B/op	      12 allocs/op
BenchmarkDrivers/parallel/pipeline/small_kv/go-redis-64                      	 6740984	      7903 ns/op	     475 B/op	      15 allocs/op

For larger amounts of data being transferred the differences become less noticeable, but both radix versions come out on top:

BenchmarkDrivers/parallel/no_pipeline/large_kv/radixv4-64                    	 2395707	     22766 ns/op	   12553 B/op	       4 allocs/op
BenchmarkDrivers/parallel/no_pipeline/large_kv/radixv3-64                    	 3150398	     17087 ns/op	   12745 B/op	       4 allocs/op
BenchmarkDrivers/parallel/no_pipeline/large_kv/redigo-64                     	 1593054	     34038 ns/op	   24742 B/op	       9 allocs/op
BenchmarkDrivers/parallel/no_pipeline/large_kv/redispipe_pause150us-64       	 2105118	     25085 ns/op	   16962 B/op	      11 allocs/op
BenchmarkDrivers/parallel/no_pipeline/large_kv/redispipe_pause0-64           	 2354427	     24280 ns/op	   17295 B/op	      11 allocs/op
BenchmarkDrivers/parallel/no_pipeline/large_kv/go-redis-64                   	 1519354	     35745 ns/op	   14033 B/op	      14 allocs/op

All results above show the high-concurrency results (-cpu 64). Concurrencies of 16 and 32 are also included in the results, but didn't show anything different.

For serial workloads, which involve a single connection performing commands one after the other, radix is either as fast as, or within a couple percent of, the other drivers tested. This use-case is much less common, so when tradeoffs have been made between parallel and serial performance radix has generally leaned towards parallel.

Serial non-pipelined:

BenchmarkDrivers/serial/no_pipeline/small_kv/radixv4-16 	  346915	    161493 ns/op	      67 B/op	       4 allocs/op
BenchmarkDrivers/serial/no_pipeline/small_kv/radixv3-16 	  428313	    138011 ns/op	      67 B/op	       4 allocs/op
BenchmarkDrivers/serial/no_pipeline/small_kv/redigo-16  	  416103	    134438 ns/op	     134 B/op	       8 allocs/op
BenchmarkDrivers/serial/no_pipeline/small_kv/redispipe_pause150us-16         	   86734	    635637 ns/op	     217 B/op	      11 allocs/op
BenchmarkDrivers/serial/no_pipeline/small_kv/redispipe_pause0-16             	  340320	    158732 ns/op	     216 B/op	      11 allocs/op
BenchmarkDrivers/serial/no_pipeline/small_kv/go-redis-16                     	  429703	    138854 ns/op	     408 B/op	      13 allocs/op

Serial pipelined:

BenchmarkDrivers/serial/pipeline/small_kv/radixv4-16                         	  624417	     82336 ns/op	      83 B/op	       5 allocs/op
BenchmarkDrivers/serial/pipeline/small_kv/radixv3-16                         	  784947	     68540 ns/op	     163 B/op	       7 allocs/op
BenchmarkDrivers/serial/pipeline/small_kv/redigo-16                          	  770983	     69976 ns/op	     134 B/op	       8 allocs/op
BenchmarkDrivers/serial/pipeline/small_kv/redispipe_pause150us-16            	  175623	    320512 ns/op	     312 B/op	      12 allocs/op
BenchmarkDrivers/serial/pipeline/small_kv/redispipe_pause0-16                	  642673	     82225 ns/op	     312 B/op	      12 allocs/op
BenchmarkDrivers/serial/pipeline/small_kv/go-redis-16                        	  787364	     72240 ns/op	     472 B/op	      15 allocs/op

Serial large values:

BenchmarkDrivers/serial/no_pipeline/large_kv/radixv4-16                      	  253586	    217600 ns/op	   12521 B/op	       4 allocs/op
BenchmarkDrivers/serial/no_pipeline/large_kv/radixv3-16                      	  317356	    179608 ns/op	   12717 B/op	       4 allocs/op
BenchmarkDrivers/serial/no_pipeline/large_kv/redigo-16                       	  244226	    231179 ns/op	   24704 B/op	       8 allocs/op
BenchmarkDrivers/serial/no_pipeline/large_kv/redispipe_pause150us-16         	   80174	    674066 ns/op	   13780 B/op	      11 allocs/op
BenchmarkDrivers/serial/no_pipeline/large_kv/redispipe_pause0-16             	  251810	    209890 ns/op	   13778 B/op	      11 allocs/op
BenchmarkDrivers/serial/no_pipeline/large_kv/go-redis-16                     	  236379	    225677 ns/op	   13976 B/op	      14 allocs/op

Copyright and licensing

Unless otherwise noted, the source files are distributed under the MIT License found in the LICENSE.txt file.

radix's Issues

Does radix support go 1.9.3?

Hi, I see your docs mentioned:

This project's name was recently changed from radix.v3 to radix, to account for go's new module system. As long as you are using the latest update of your major go version (1.9.7+, 1.10.3+, 1.11+) the module-aware go get should work correctly with the new import

I was wondering, does radix support go 1.9.3? Apart from changing the import path, are there any other incompatibility problems?

README memory benchmark results are outdated (in a bad sense)

~/go/src/github.com/mediocregopher/radix.v3$ go test -run FoO -bench GetSet -benchmem
goos: linux
goarch: amd64
pkg: github.com/mediocregopher/radix.v3
BenchmarkSerialGetSet/radix-8              50000             36969 ns/op            1092 B/op         18 allocs/op
BenchmarkSerialGetSet/redigo-8             50000             34345 ns/op              86 B/op          5 allocs/op
BenchmarkParallelGetSet/radix-8           100000             12359 ns/op            1126 B/op         19 allocs/op
BenchmarkParallelGetSet/redigo-8          200000             11311 ns/op             121 B/op          6 allocs/op
PASS
ok      github.com/mediocregopher/radix.v3      8.285s

Redigo benchmarks are whack

goos: linux
goarch: arm
pkg: github.com/mediocregopher/radix.v3
...
BenchmarkSerialGetSet/radix-4      10000            493083 ns/op              99 B/op          6 allocs/op
BenchmarkSerialGetSet/redigo-4      5000            208302 ns/op              46 B/op          5 allocs/op
BenchmarkParallelGetSet/radix-4                      100          20659156 ns/op            5179 B/op         26 allocs/op
BenchmarkParallelGetSet/redigo-4                   20000             60535 ns/op              63 B/op          6 allocs/op

I'm not super concerned about the serial ones, but the parallel case (Pool) definitely needs to be improved. I don't know where all those extra allocs came from, possibly 7b5d070.

Will Pool support max connections setting?

func NewPool(network, addr string, size int, opts ...PoolOpt) (*Pool, error)

NewPool creates a *Pool which will keep open at least the given number of connections to the redis instance at the given address.

So can we limit the maximum number of connections that can be opened?

Check out RedisPipe and figure out if its strategy of implicit pipelining/batching can be incorporated

https://github.com/joomcode/redispipe

That project essentially uses a batching concept to implicitly pipeline all commands, and uses a WritePause timeout to batch commands together into single system reads/writes. I think doing something similar in radix might be possible, though I haven't really thought out how yet. I think the main challenge will be in coordinating commands across Pool's connections in a way which utilizes the batching to its fullest extent.

How to subscribe keyspace notifications in cluster mode?

I need to listen for expired events in my application. As far as I know, keyspace notifications are node-specific, unlike regular pub/sub. So, to catch expired events for all keys I need to attach to every node in cluster mode. How can I achieve this with radix? I couldn't find any solution.
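
One possible approach (a sketch, not an official answer): dial a separate pubsub connection to every node returned by the cluster's Topo method and pattern-subscribe to the keyevent channel on each. This assumes Cluster.Topo exposes each node's Addr and that PubSubConn has a PSubscribe method; it also does not handle topology changes.

// Sketch: subscribe to expired-key notifications on every cluster node.
func subscribeExpiredAllNodes(c *radix.Cluster, msgCh chan radix.PubSubMessage) error {
	for _, node := range c.Topo() {
		conn, err := radix.Dial("tcp", node.Addr)
		if err != nil {
			return err
		}
		ps := radix.PubSub(conn)
		// Requires notify-keyspace-events to include expired events (e.g. "Ex").
		if err := ps.PSubscribe(msgCh, "__keyevent@*__:expired"); err != nil {
			return err
		}
	}
	return nil
}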

Install Issue

go get github.com/mediocregopher/radix.v3

github.com/mediocregopher/radix.v3/resp

src/github.com/mediocregopher/radix.v3/resp/resp.go:749: reader.Reset undefined (type *bytes.Reader has no field or method Reset)

sync.Pool has no Reset method...

go-redis vs radix performance

I tested radix against go-redis on a caching library I am developing https://github.com/gadelkareem/cachita and got better results with go-redis.
go-redis: https://github.com/gadelkareem/cachita/tree/test/go-redis

BenchmarkRedisCacheWithInt-8             	     500	   2764354 ns/op	    2525 B/op	      67 allocs/op
BenchmarkRedisCacheWithString-8          	     500	   2927454 ns/op	    2548 B/op	      68 allocs/op
BenchmarkRedisCacheWithMapInterface-8    	     500	   2954466 ns/op	    5167 B/op	      95 allocs/op
BenchmarkRedisCacheWithStruct-8          	     500	   2884623 ns/op	    6802 B/op	     117 allocs/op

radix: https://github.com/gadelkareem/cachita/tree/test/radix

BenchmarkRedisCacheWithInt-8             	     500	   4158970 ns/op	    2282 B/op	      62 allocs/op
BenchmarkRedisCacheWithString-8          	     300	   4136612 ns/op	    2316 B/op	      63 allocs/op
BenchmarkRedisCacheWithMapInterface-8    	     300	   4150110 ns/op	    4703 B/op	      89 allocs/op
BenchmarkRedisCacheWithStruct-8          	     300	   4274845 ns/op	    6241 B/op	     111 allocs/op

I prefer the radix syntax but I had to make the switch for performance. Note that I used FlatCmd to avoid resp.Marshaler, since I handle marshaling with msgpack.

Getting issue while using Radix V3

Hi,
I didn't find a migration guide from V2 to V3, and I face a lot of issues while trying the samples.
I found a working sample and tried to convert it to V3, but didn't get it to work.

http://www.alexedwards.net/blog/working-with-redis

db.Do(radix.FlatCmd(exists, "EXISTS", "album:2"))
can't unmarshal into int

I tried many option but....

If you could guide me on how to use Radix v3, or provide some samples, it would be very helpful.

Thanks,
Dinesh Gupta
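
The "can't unmarshal into int" error typically means the receiver was passed by value instead of by pointer; a hedged sketch of the corrected call:

var exists int
// The receiver must be a pointer so the reply can be written into it.
if err := db.Do(radix.FlatCmd(&exists, "EXISTS", "album:2")); err != nil {
	// handle error
}
// exists is 1 if the key exists, 0 otherwise.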

Database selection with pooled connections

I ran into a pretty confounding issue around pool usage and the SELECT command when trying to select an unused database index for the test (so we can safely FLUSHDB after the test concludes).

In testing, I first create a new client pool (radix.NewPool) with size > 1. Then, I use the pool to SELECT 1, or some other non-zero database index.

It took me a while to realize that this only affects a single connection in the pool 🙈. I realized what was happening once I opened up a MONITOR on my localhost Redis and saw some commands going to the selected database and others going to the default database 0.

I'm wondering what the most "idiomatic" approach is to ensure that all pool connections have selected the same database?

Current approach (causes confusing behavior)

func selectUnusedDatabase(client radix.Client) (int, error) {
  for i := 1; i < 16; i++ {
    var size int

    p := radix.Pipeline(
      radix.Cmd(nil, "SELECT", strconv.Itoa(i)),
      radix.Cmd(&size, "DBSIZE"),
    )
		
    if err := client.Do(p); err != nil {
      return 0, err
    }

    if size == 0 {
      return i, nil
    }
  }

  return 0, errors.New("all databases are used")
}

func example() {
  client, err := radix.NewPool("tcp", "localhost:6379", 20, radix.Dial)
  if err != nil {
    panic(err)
  }

  idx, err := selectUnusedDatabase(client)
  if err != nil {
    panic(err)
  }
  // We've selected an unused database for only one of our pooled connections :(
}

Maybe?

func example() {
  // Use a one-off client to identify empty databases.
  peeker, err := radix.NewConn("tcp", "localhost:6379")
  if err != nil {
    panic(err)
  }

  idx, err := selectUnusedDatabase(peeker)
  if err != nil {
    panic(err)
  }
  peeker.Close()

  // Initialize a pool with a DialFunc that selects database `idx`.
}
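
A sketch of that last comment, reusing radix.Dial and radix.PoolConnFunc as they appear elsewhere in this document (assumes a strconv import; the pool size is up to the caller):

// newPoolForDB returns a pool whose connections have all run SELECT idx,
// so every pooled connection targets the same database.
func newPoolForDB(addr string, size, idx int) (*radix.Pool, error) {
	connFn := func(network, addr string) (radix.Conn, error) {
		conn, err := radix.Dial(network, addr)
		if err != nil {
			return nil, err
		}
		if err := conn.Do(radix.Cmd(nil, "SELECT", strconv.Itoa(idx))); err != nil {
			conn.Close()
			return nil, err
		}
		return conn, nil
	}
	return radix.NewPool("tcp", addr, size, radix.PoolConnFunc(connFn))
}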

Possible deadlock around pool.Close()

Hello!

I'm seeing some deadlock behavior when closing the pool. I don't immediately see where it could be occurring, but I thought I'd raise an issue.

Will fill in details as I discover them.

Overflowing Redis connections

Using the radix.v3 client, my Go app will sporadically overflow a Redis db running on localhost. My Go app is using a single radix.v3 Redis pool of size 500. In a redis-cli client I can see connected_clients:10000, which is the max limit. I suspect all the redis connections are erroring out and the Go client is attempting to open new connections faster than Redis can handle, so Redis thinks it's maxed out at 10k connections.

Restarting the Go app temporarily resolves the problem, but after a few hours, or maybe a day, the issue occurs again.

FYI, the Redis db is running on localhost. It's running the latest 4.0.11. Nothing else is using Redis except a manual redis-cli client.

Any ideas on how to resolve this?

PubSub pattern matching only works with *

PSUBSCRIBE supports 3 glob style patterns but the package currently only handles * wildcards.

There is really no need to do custom pattern matching since Redis already returns the matched pattern for messages and will return a message multiple times when it matches multiple subscribed patterns. The PubSubConn already keeps subscriptions by pattern so it could just do a simple map lookup instead of trying to match all known patterns.

This is even noted in the Pub/Sub docs.

Deadlocks on pubsub.close and/or pubsub.unsubscribe

Hey, we are experiencing some deadlocks with close() and unsubscribe(). We're currently running a setup that uses websockets and rooms. Sometimes we need to close/tear down a room, and that triggers a method on the room that does the unsubscribing.

Subscribe on my structure is a method that receives an output channel of "message" structs and a done channel. It runs in its own goroutine and starts an endless for loop, which reads from the pubsub subscribe channel, transforms the []byte into messages, and sends them to the output channel.

When the cancel signal comes, the done channel receives a value and I call the unsubscribe method. 3/4 of the time, the unsubscribe just blocks waiting for the connection's mutex to be freed, which never happens. And I have no idea or control over who holds the lock or why it is stuck.

I have deferred code that tells me when the routine ends; again, 3/4 of the time it doesn't end, and I can still pinpoint the hang to the unsubscribe.

I originally tried a "global connection" approach, thinking I would be able to use channels to multiplex the connection: every new subscribe goes to a different channel. But the deadlock was bringing the server down, because all redis connections just stopped working.

So now I create one conn per subscription, which avoids freezing the whole system. I also stopped unsubscribing altogether and now just close the conn. Upon log inspection, I can see the close hanging just like unsubscribe was.

Redis stream additions

This proposal isn't full-fledged as is but is meant as a starting point.

Problem

Redis 5 is out and we can finally use streams. Unfortunately the values returned by the stream commands are quite a bit more complex than those returned by most other commands, which makes them awkward to use with this package, especially when efficiency is important (which is quite common when using Redis, at least for my use cases).

Examples

XRANGE returns an (RESP) array of "stream entries" which are themselves arrays of 2 elements of which the first is the entry ID and the second another array of key-value pairs. The entry ID consists of two unsigned 64 bit integers (time and sequence number) combined with a dash.

Also when paginating a stream via XRANGE the entry IDs are inclusive so one must increment (or decrement if using XREVRANGE) the ID which means parsing the ID, incrementing/decrementing the sequence part and, if the value over- or underflows, incrementing/decrementing the time part and setting the sequence part to 0/math.MaxUint64.

> XRANGE somestream - +
1) 1) 1526985054069-0
   2) 1) "duration"
      2) "72"
      3) "event-id"
      4) "9"
      5) "user-id"
      6) "839248"
2) 1) 1526985069902-0
   2) 1) "duration"
      2) "415"
      3) "event-id"
      4) "2"
      5) "user-id"
      6) "772213"
... other entries here ...

XREAD and XREADGROUP return an array of 2 element arrays that consist of the stream group and an array of entries in the format returned by XRANGE.

> XREAD COUNT 2 STREAMS mystream writers 0-0 0-0
1) 1) "mystream"
   2) 1) 1) 1526984818136-0
         2) 1) "duration"
            2) "1532"
            3) "event-id"
            4) "5"
            5) "user-id"
            6) "7782813"
      2) 1) 1526999352406-0
         2) 1) "duration"
            2) "812"
            3) "event-id"
            4) "9"
            5) "user-id"
            6) "388234"
2) 1) "writers"
   2) 1) 1) 1526985676425-0
         2) 1) "name"
            2) "Virginia"
            3) "surname"
            4) "Woolf"
      2) 1) 1526985685298-0
         2) 1) "name"
            2) "Jane"
            3) "surname"
            4) "Austen"

There's also XPENDING, which returns its response in a different format depending on how it is used.

> XPENDING mystream group55
1) (integer) 1
2) 1526984818136-0
3) 1526984818136-0
4) 1) 1) "consumer-123"
      2) "1"
> XPENDING mystream group55 - + 10
1) 1) 1526984818136-0
   2) "consumer-123"
   3) (integer) 196415
   4) (integer) 1

XCLAIM returns output in the same format as XRANGE unless the JUSTID option is given, in which case only IDs are returned and no key-value pairs.

Proposal

I propose that we add some convenience types / functions for working with Redis Streams, either as part of the radix or resp package, or via a new sub-package. Having the stream convenience stuff in an external package would probably deter people from using radix.

Also having the functionality here could help with performance (e.g. we could reuse the readUint function for reading the IDs without allocating strings).

Ignoring naming (which is hard!) for now, something like this comes to my mind:

type StreamsEntries map[string]Entries // unmarshals output of XREAD/XREADGROUP
func (StreamsEntries) UnmarshalRESP(*bufio.Reader) error

type Entries []Entry // unmarshals output of XRANGE
func (Entries) UnmarshalRESP(*bufio.Reader) error

type Entry struct {
    ID EntryID
    Fields map[string]string
}
func (Entry) UnmarshalRESP(*bufio.Reader) error

type EntryID struct {
    Time uint64
    Sequence uint64
}

func (EntryID) MarshalRESP(*bufio.Writer) error
func (EntryID) UnmarshalRESP(*bufio.Reader) error
func (EntryID) Prev() EntryID
func (EntryID) Next() EntryID
func (EntryID) Before(EntryID) bool // optional, nice-to-have

radix v3 illegally uses internal package

go get github.com/mediocregopher/radix.v3

Returns

go/src/github.com/mediocregopher/radix.v3/stream.go:13:2: use of internal package github.com/mediocregopher/radix/internal/bytesutil not allowed

Cluster sync duration as an option

Hi :) In NewCluster() there is a hardcoded sync duration:

c.syncEvery(30 * time.Second)

How about adding this duration into ClusterOpt?

Make Clients for secondary instances in sentinel and cluster setups available to be used

This was brought up in an issue in v2 (mediocregopher/radix.v2#89), it's useful for those who are trying to distribute work across their secondaries to have access to them via the Sentinel and Cluster clients.

In the case of Sentinel I'm thinking it would have two new methods:

// returns the current primary addr, and current secondary addrs
func (s *Sentinel) Topo() (string, []string)

// may return an error if that addr is no longer known to be a primary or secondary. The returned Client will
// need to have an implicit killswitch so the main Sentinel instance can kill it if it is removed from the 
// topo. Maybe that should be optional?
func (s *Sentinel) Client(addr string) (Client, error)

Cluster only needs one new method, since it already has the Topo method.

// has same concerns as Sentinel's Client method
func (c *Cluster) Client(addr string) (Client, error)

Reading response of FT.SEARCH command of redisearch module

Again more of a question than an issue (is there a different way to ask questions?).

We're using the redisearch module and the response of the search command FT.SEARCH is defined as follows:

Array reply, where the first element is the total number of results, and then pairs of document id, and a nested array of field/value.

I fail to understand which Go slice/map/struct/interface data type I need to hand to radix.Cmd() for parsing the response into. Any help appreciated, and btw, I really like this library.
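
One option (a sketch, assuming the decoder can unmarshal an arbitrary RESP array into a []interface{}, where nested arrays become nested []interface{} values and the exact scalar types may vary):

// Sketch: decode the mixed FT.SEARCH reply generically and walk it.
var raw []interface{}
if err := client.Do(radix.Cmd(&raw, "FT.SEARCH", "myIndex", "hello")); err != nil {
	// handle error
}

// raw[0] is the total number of results, followed by alternating
// document IDs and field/value arrays.
for i := 1; i+1 < len(raw); i += 2 {
	docID := raw[i]
	fields := raw[i+1] // itself a []interface{} of alternating field, value
	_, _ = docID, fields
}

For hot paths, a custom type implementing resp.Unmarshaler (as proposed in the streams issue above) avoids the generic decoding, at the cost of more code.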

resp2.Any does not check for resp.Marshaler

Just stumbled over this one, as it bit me while writing some tests.

resp2.Any already handles types implementing resp.Unmarshaler when unmarshalling but does not do the same for types implementing resp.Marshaler when marshalling.

NewEvalScript

A question, not an issue. I don't understand the comment for the example of NewEvalScript(). Does "set as global variable" refer to "within redis" or within the Go application?

// redis command
var getSet = NewEvalScript(1, `
		local prev = redis.call("GET", KEYS[1])
		redis.call("SET", KEYS[1], ARGV[1])
		return prev
`)

In my daemonized application I have a func using NewEvalScript() for the same script over and over. Should I keep the EvalScript which is generated by NewEvalScript() as a global variable so the script is not generated every time the func is called?

tidy module dependencies

Hi!

Please, take a look at #68

I've tried to go mod tidy your dependencies, and found out that go.mod becomes very "dirty" because of the benchmarks comparison with redigo and redispipe.

You could check the details here - golang/go#30206

So I've decided to clean up the real lib dependencies from the benchmarks.
The only way to do it is to move benchmarks to a separate module.

I've hit 2 problems:

  1. You can't use github.com/joomcode/errorx v0.1.0 // indirect, the code doesn't compile, see joomcode/errorx#17. So I had to pin it to current master.
  2. Problem with NewPool came out - you were using private p.initDone chan. I tried to comment it out, but the tests completely broke. It seems you can't use the Pool without waiting until all connections are initialized. So I've added a dial timeout and a wait inside NewPool func as a workaround - now no private fields are used in tests.

If you plan to fix the async connection creation - I could remove the public PoolDialTimeout option and simply use the default 10 seconds value.
Or you could fix it in future and deprecate the PoolDialTimeout usage.

Incompatible With Glide Package Manager

I'm working on a project which uses Glide for Go package management.

In my glide.yml I have:

- package: github.com/mediocregopher/radix
  version: v3.0.0

When I run glide update I get an error:

[ERROR] Error scanning github.com/mediocregopher/radix/v3/resp: cannot find package "." in:
        /Users/asnyder/.glide/cache/src/https-github.com-mediocregopher-radix/v3/resp
[ERROR] Error scanning github.com/mediocregopher/radix/v3/resp/resp2: cannot find package "." in:
        /Users/asnyder/.glide/cache/src/https-github.com-mediocregopher-radix/v3/resp/resp2

For now I have worked around this by adding these additional lines to my glide.yml:

ignore:
- github.com/mediocregopher/radix/v3/resp
- github.com/mediocregopher/radix/v3/resp/resp2

However, by doing this, I am now unable to use any types from the resp package, which means I can't do any custom marshaling/unmarshaling. If I try using one of these types, I get Go compilation errors. For example, this is what I get when I try to mock a Conn type using gomock:

./cache_impl_test.go:56:12: cannot use a (type *mock_radix.MockConn) as type radix.Conn in argument to arg.Run:
	*mock_radix.MockConn does not implement radix.Conn (wrong type for Decode method)
		have Decode("github.com/lyft/ratelimit/vendor/github.com/mediocregopher/radix/resp".Unmarshaler) error
		want Decode("github.com/mediocregopher/radix/resp".Unmarshaler) error

So it looks like ignoring those resp packages in my glide.yml is not the correct approach. I'm not super familiar with Go's new module system, but I think that's related to the issue here. I was not able to find literature online about Glide + Go modules.

Is there a recommended way of using radix with Glide?

I have also tried these other variations of my glide.yml but to no avail.

- package: github.com/mediocregopher/radix/v3
  version: v3.0.0
- package: github.com/mediocregopher/radix
  version: v3.0.0
  subpackages:
  - v3
- package: github.com/mediocregopher/radix
  version: v3.0.0
  subpackages:
  - v3
  - resp
  - resp/resp2
- package: github.com/mediocregopher/radix
  version: v3.0.0
  subpackages:
  - v3
  - v3/resp
  - v3/resp/resp2

bench result confusing

I run go test -v -run=XXX -bench=GetSet -benchmem >/tmp/radix.stat and the result is

goos: linux
goarch: amd64
pkg: github.com/mediocregopher/radix
BenchmarkSerialGetSet/radix-32             10000            118389 ns/op              68 B/op          4 allocs/op
BenchmarkSerialGetSet/redigo-32            10000            101131 ns/op              86 B/op          5 allocs/op
BenchmarkSerialGetSet/redispipe-32          3000            577234 ns/op             169 B/op          8 allocs/op
BenchmarkSerialGetSet/redispipe_pause0-32                  10000            140855 ns/op             168 B/op          8 allocs/op
BenchmarkParallelGetSet/radix/no_pipelining-32             30000             41502 ns/op             302 B/op          8 allocs/op
BenchmarkParallelGetSet/radix/one_pipeline-32             100000             19035 ns/op             113 B/op          4 allocs/op
BenchmarkParallelGetSet/radix/default-32                  100000             14458 ns/op             155 B/op          4 allocs/op
BenchmarkParallelGetSet/redigo-32                          10000           3131131 ns/op            7998 B/op         28 allocs/op
PASS
ok      github.com/mediocregopher/radix 42.769s

While the result in your repo is

# go test -v -run=XXX -bench=GetSet -benchmem >/tmp/radix.stat
# benchstat radix.stat
name                                     time/op
SerialGetSet/radix                         89.1µs ± 7%
SerialGetSet/redigo                        87.3µs ± 7%
ParallelGetSet/radix/default-8             5.47µs ± 2%  <--- The good stuff
ParallelGetSet/redigo-8                    27.6µs ± 2%
ParallelGetSet/redispipe-8                 4.16µs ± 3%

name                                      alloc/op
SerialGetSet/radix                          67.0B ± 0%
SerialGetSet/redigo                         86.0B ± 0%
ParallelGetSet/radix/default-8              73.0B ± 0%
ParallelGetSet/redigo-8                      138B ± 4%
ParallelGetSet/redispipe-8                   168B ± 0%

name                                       allocs/op
SerialGetSet/radix                           4.00 ± 0%
SerialGetSet/redigo                          5.00 ± 0%
ParallelGetSet/radix/default-8               4.00 ± 0%
ParallelGetSet/redigo-8                      6.00 ± 0%
ParallelGetSet/redispipe-8                   8.00 ± 0%

Why do my test results show BenchmarkParallelGetSet/redigo-32 (3131131 ns/op) performing much worse than BenchmarkParallelGetSet/radix/no_pipelining-32 (41502 ns/op)?

Also, your benchmark results don't list radix with pipelining.

Frequent flushing with pipelines

The implementation of Pipeline calls Encode for each command in the pipeline, which can be quite slow.

The problem is that connWrap.Encode flushes the underlying bufio.Writer with each call. Calling Flush with each call is the right behavior in most cases, but can really hurt for pipelines. For one of my local tools the calls to Flush take up ~40% (1.8 seconds!) of the whole pipeline processing time.

I currently use a copy of Pipeline/pipeline that implements the resp.Marshaler interface and Encodes itself, like this:

func (p pipeline) Run(c radix.Conn) error {
	if err := c.Encode(p); err != nil {
		return err
	}
	for _, cmd := range p {
		if err := c.Decode(cmd); err != nil {
			return err
		}
	}
	return nil
}

func (p pipeline) MarshalRESP(w io.Writer) error {
	for _, cmd := range p {
		if err := cmd.MarshalRESP(w); err != nil {
			return err
		}
	}

	return nil
}

This way we only need 1 manual call to Flush, making use of the internal flushing of the underlying bufio.Writer for larger pipelines.

@mediocregopher Any opinions on this approach?

about hmset

// demo
type User struct {
    Id         int32  `json:"id"`
}
data := User{
    Id: 32,
}
m := structs.Map(data)

RedisPool.Do(radix.FlatCmd(nil, "HMSET", redisKey, m))


The problem: the field name stored in Redis is uppercase.

Is there a way to make the key in Redis lowercase?
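
The map produced by structs.Map uses the Go field names ("Id"), which is why the hash field ends up uppercase. One workaround (a sketch) is to pass the field names explicitly to FlatCmd instead of going through a map:

user := User{Id: 32}

// Field names are chosen explicitly here, so "id" is stored lowercase.
if err := RedisPool.Do(radix.FlatCmd(nil, "HMSET", redisKey,
	"id", user.Id,
)); err != nil {
	// handle error
}

Alternatively, build the map yourself with whatever key casing you want before handing it to FlatCmd.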

Documentation example for MULTI...EXEC

I was hoping to get an example or two (and add it to the docs) for MULTI...EXEC and best practices.

I am using a WithConn block to try MULTI and EXEC. I'm wondering what the best approach is, since conn.Do could return an error at any point along the way? If there's an error in the middle of a MULTI block and I returned with an error, wouldn't the MULTI never get rolled back (eg. say the Client runs out of pool connections or the system hits the file descriptor limit)?

contrived example:

poolOrClusterClient.Do(radix.WithConn(key, func(conn radix.Conn) error {
        var err error
        if err = conn.Do(radix.Cmd(nil, "MULTI")); err != nil {
            return err
        }
       var rc int
        if err = conn.Do(radix.Cmd(&rc, "EXISTS", key)); err == nil {
            if err := conn.Do(radix.Cmd(nil, "DEL", key)); err != nil {
                  // ???
            }
        }
        if err = conn.Do(radix.Cmd(nil, "EXEC")); err != nil {
            return err
        }
        return nil
}))
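
One common pattern (a sketch, not an official recommendation) is to DISCARD the queued commands whenever anything fails after MULTI, so the connection goes back to the pool in a clean state:

err := poolOrClusterClient.Do(radix.WithConn(key, func(conn radix.Conn) error {
	if err := conn.Do(radix.Cmd(nil, "MULTI")); err != nil {
		return err
	}

	// If anything fails before EXEC, discard the transaction so the
	// connection is not returned to the pool mid-MULTI.
	committed := false
	defer func() {
		if !committed {
			conn.Do(radix.Cmd(nil, "DISCARD"))
		}
	}()

	if err := conn.Do(radix.Cmd(nil, "DEL", key)); err != nil {
		return err
	}
	if err := conn.Do(radix.Cmd(nil, "EXEC")); err != nil {
		return err
	}
	committed = true
	return nil
}))
if err != nil {
	// handle err
}

Note that replies to commands issued between MULTI and EXEC are QUEUED placeholders; the real values only arrive in the EXEC reply, which is one reason reading rc inside the contrived example above would not behave as expected.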

Stub Blocks When Running Pipeline Action

The radix.Stub Conn works fine when its Do method is called with a CmdAction argument. However, if you call its Do with a pipeline ([]CmdAction), it blocks indefinitely.

The following code runs and exits correctly:

stub := radix.Stub("", "", func(args []string) interface{} {
	return nil
})
stub.Do(radix.Cmd(nil, "PING"))
stub.Do(radix.Cmd(nil, "PING"))

The following code blocks on the call to Do and does not exit:

stub := radix.Stub("", "", func(args []string) interface{} {
	return nil
})
stub.Do(radix.Pipeline(radix.Cmd(nil, "PING"), radix.Cmd(nil, "PING")))

I think this has to do with how Stub's Decode method blocks if its buffer is empty.

*Pool.doRefill() isn't thread-safe but it should be

The code section:

func (sp *Pool) doRefill() {
	if len(sp.pool) == cap(sp.pool) {
		return
	}
	spc, err := sp.newConn()
	if err == nil {
		sp.put(spc)
	}
}

The issue:
After the check passes, sp.pool can be changed by other radix goroutines (through Do(), atIntervalDo()). This can cause redundant connections to be created.
Using len() and cap() on channels isn't the best idea; it's not acceptable for production :(
Possible solutions:

  1. Replace the channel with a collection (like a queue or slice) and a mutex to lock it before the comparison.
    Go channels aren't so good:
    https://www.jtolio.com/2016/03/go-channels-are-bad-and-you-should-feel-bad/
    https://medium.com/@robot_dreams/goroutines-and-channels-arent-free-a8684f3b6560
  2. ???

can't connection redis-cluster

The problem is:

192.168.30.70 is an IP on the physical machine.

The virtual machine's IP is 10.20.80.132, hosted on 192.168.30.70.

Setup one:
On 10.20.80.132 I installed Docker and ran the redis-cluster in Docker with --net=host.

Setup two:
On 10.20.80.132 I installed the redis cluster directly, without Docker.

In both setups the connection fails:
error connecting to 192.168.30.70:7000: dial tcp 192.168.30.70:7000: i/o timeout

pingSpin

t := time.NewTicker(10 * time.Second / time.Duration(cap(sp.pool)))

If the pool has a large number of connections, the resulting PING interval becomes too frequent.

Pool could provider more connections more than pool size

The following source is the get method of pool.go.
Based on "default: return sp.newConn()", does this mean that if there aren't enough connections in the pool channel, it will temporarily create a new connection? If so, the number of usable connections can exceed the specified pool size.

pool.go

func (sp *Pool) get() (*staticPoolConn, error) {
	sp.l.RLock()
	defer sp.l.RUnlock()
	if sp.closed {
		return nil, errClientClosed
	}

	select {
	case spc := <-sp.pool:
		return spc, nil
	default:
		return sp.newConn()
	}
}

*Pool.doRefill() creates redundant connections while doing commands

Radix.v3 frequently creates new connections and closes them, because some connections are temporarily taken from the pool to run commands.

Example with interval ping:
The scheme of the example: https://drive.google.com/file/d/1NczXcPK0k1FCqeHz6nZsB-2dBe3q5brZ/view?usp=sharing

  1. atIntervalDo() runs sp.Do(Cmd(nil, "PING"));
  2. At the same time (in different goroutines):
    • *Pool.Do() gets a connection and runs the ping command;
    • atIntervalDo() runs sp.doRefill();
  3. *Pool.doRefill() detects that the pool is short a connection (1 conn is taken for the ping);
  4. At the same time (in different goroutines):
    • the ping command finishes and *Pool.Do() returns the connection to the pool (the pool is full again);
    • *Pool.doRefill() creates a new connection because of step 3;
  5. *Pool.doRefill() tries to put the connection created in 4.2 into the pool, but it's full. It then tries to put the connection into the overflow; when the overflow is also full, it closes the connection;
  6. Repeat.

How to reproduce:
main.go:

package main

import (
	"fmt"
	"log"
	"github.com/mediocregopher/radix.v3"
	"time"
	"strconv"
)

func redisConnSelectDb(db int) func(network, addr string) (radix.Conn, error) {
	dbStr := strconv.Itoa(db)
	return func(network, addr string) (radix.Conn, error) {
		conn, err := radix.DialTimeout(network, addr, 1*time.Minute)
		if err != nil {
			return nil, err
		}
		if err := conn.Do(radix.Cmd(nil, "SELECT", dbStr)); err != nil {
			conn.Close()
			return nil, err
		}
		return conn, nil
	}
}

func main() {
	conf, err := LoadConfig("config.json", "config-custom.json")
	if err != nil {
		log.Fatal(err)
	}

	redisAddress := fmt.Sprintf("%s:%d", conf.Redis.Host, conf.Redis.Port)
	pool0, err := radix.NewPool("tcp", redisAddress, 20, radix.PoolConnFunc(redisConnSelectDb(1)))
	if err != nil {
		log.Fatal(err)
	}

	for {
		fmt.Printf("1: %d\n", pool0.NumAvailConns() /*, pool1.NumAvailConns(), pool2.NumAvailConns()*/)
		time.Sleep(1 * time.Second)
	}
}

pool.go (lines 362-379):

func (sp *Pool) put(spc *staticPoolConn) {
	sp.l.RLock()
	defer sp.l.RUnlock()
	if spc.lastIOErr != nil || sp.closed {
		spc.Close()
		return
	}

	select {
	case sp.pool <- spc:
	default:
		select {
		case sp.overflow <- spc:
		default:
			spc.Close()
			fmt.Print("a connection closed as redundant") //TODO: Remove after debug!!!!!!!!!!!!!!!!!
		}
	}
}

Run and wait some time for random synchronization.
Please, fix it.

Avoid WithConn when possible with Redis Cluster

Calling Do on a *Cluster currently always wraps the Action with WithConn.

https://github.com/mediocregopher/radix.v3/blob/6501e39a206c4e65228f6a05aac97d1fdd66d1ef/cluster.go#L332-L345

From my understanding of the code and the Redis Cluster protocol, the WithConn is only necessary when also sending the ASKING command. That means for most calls the WithConn does basically nothing and can be skipped unless ask == true.

Although calling WithConn does not hurt correctness, it still hurts allocations quite a bit, which can be seen when changing doInner to only call WithConn when ask == true.

Since I'm not that familiar with the Redis Cluster protocol / logic I wanted to double check that my understanding here is correct, before making a change.

Here's a diff that makes the WithConn call optional and adds a benchmark for the Cluster.Do implementation.

diff --git a/cluster.go b/cluster.go
index 4fb8c08..1a93f8e 100644
--- a/cluster.go
+++ b/cluster.go
@@ -323,14 +323,17 @@ func (c *Cluster) doInner(a Action, addr, key string, ask bool, attempts int) er
                return err
        }
 
-       err = p.Do(WithConn(key, func(conn Conn) error {
-               if ask {
+       if ask {
+               err = p.Do(WithConn(key, func(conn Conn) error {
                        if err := conn.Do(Cmd(nil, "ASKING")); err != nil {
                                return err
                        }
-               }
-               return conn.Do(a)
-       }))
+
+                       return conn.Do(a)
+               }))
+       } else {
+               err = p.Do(a)
+       }
 
        if err == nil {
                return nil
diff --git a/cluster_test.go b/cluster_test.go
index 6c84ffe..94365c5 100644
--- a/cluster_test.go
+++ b/cluster_test.go
@@ -136,6 +136,19 @@ func TestClusterDo(t *T) {
        }
 }
 
+func BenchmarkClusterDo(b *B) {
+       c, _ := newTestCluster()
+
+       k, v := clusterSlotKeys[0], randStr()
+       require.Nil(b, c.Do(Cmd(nil, "SET", k, v)))
+
+       b.ResetTimer()
+
+       for i := 0; i < b.N; i++ {
+               require.Nil(b, c.Do(Cmd(nil, "GET", k)))
+       }
+}
+
 func TestClusterWithPrimaries(t *T) {
        c, _ := newTestCluster()
        var topo ClusterTopo

Benchmark results with (benchstat):

name         old time/op    new time/op    delta
ClusterDo-8    5.59µs ± 1%    5.60µs ± 3%     ~     (p=0.434 n=9+10)

name         old alloc/op   new alloc/op   delta
ClusterDo-8    4.85kB ± 0%    4.78kB ± 0%   -1.37%  (p=0.000 n=9+9)

name         old allocs/op  new allocs/op  delta
ClusterDo-8      20.0 ± 0%      17.0 ± 0%  -15.00%  (p=0.000 n=10+10)

Using a real Redis Cluster the allocation difference is the same (the total number of allocations is much lower in both cases, but there are still 3 allocations saved).

Connecting to a redis schema url

I'm trying to connect using a redis:// URL, but I keep getting an error. Is there any documentation or help on this?

Sample url:
redis://h:[email protected]:38799

And this is the code I use to connect:

        fmt.Println("running")
	client, err := radix.NewPool("tcp", "redis://h:[email protected]:38799", 10)
	if err != nil {
		// handle error
	}

	var fooVal string
	err = client.Do(radix.Cmd(&fooVal, "SET", "foo", "hello"))
	fmt.Println(err, fooVal)

I've also tried to do [redis://h:[email protected]]:38799 and use it.

This is the error:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x4f2b7e]

goroutine 1 [running]:
github.com/mediocregopher/radix%2ev3.(*Pool).getExisting(0x0, 0x0, 0x0, 0x0)
	/home/aks/go/src/github.com/mediocregopher/radix.v3/pool.go:365 +0x4e
github.com/mediocregopher/radix%2ev3.(*Pool).get(0x0, 0x40aa78, 0x51afe0, 0x525120)
	/home/aks/go/src/github.com/mediocregopher/radix.v3/pool.go:403 +0x2f
github.com/mediocregopher/radix%2ev3.(*Pool).Do(0x0, 0x7f88d84f70c0, 0xc0000c2000, 0x0, 0x0)
	/home/aks/go/src/github.com/mediocregopher/radix.v3/pool.go:440 +0x37
main.main()
	/home/aks/hello.go:17 +0x19e
exit status 2
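
The error from NewPool is discarded in the snippet above, so client is nil when Do is called, hence the nil pointer panic. If the installed version of radix does not accept redis:// addresses directly, one workaround (a sketch, assuming a net/url import) is to parse the URL yourself and authenticate in a custom conn func:

func poolFromURL(rawURL string, size int) (*radix.Pool, error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return nil, err
	}
	pass, _ := u.User.Password()

	connFn := func(network, addr string) (radix.Conn, error) {
		conn, err := radix.Dial(network, addr)
		if err != nil {
			return nil, err
		}
		if pass != "" {
			if err := conn.Do(radix.Cmd(nil, "AUTH", pass)); err != nil {
				conn.Close()
				return nil, err
			}
		}
		return conn, nil
	}
	return radix.NewPool("tcp", u.Host, size, radix.PoolConnFunc(connFn))
}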

panic: send on closed channel

I encountered this after closing a pool as part of cleanup for an integration test. I think you need to acquire the mutex before sending the PING, in case of a race between that timer and the channel being closed?

goroutine 1737 [running]:
github.com/mediocregopher/radix/v3.(*pipeliner).Do(0xc0000d6720, 0xd16900, 0xc00042e3f0, 0xbc2c01, 0xc02b20)
	/build/src/.../vendor/github.com/mediocregopher/radix/v3/pipeliner.go:102 +0x161
github.com/mediocregopher/radix/v3.(*Pool).Do(0xc000246000, 0xd16900, 0xc00042e3f0, 0x0, 0x0)
	/build/src/.../vendor/github.com/mediocregopher/radix/v3/pool.go:497 +0x1b8
github.com/mediocregopher/radix/v3.NewPool.func2()
	/build/src/.../vendor/github.com/mediocregopher/radix/v3/pool.go:309 +0xb7
github.com/mediocregopher/radix/v3.(*Pool).atIntervalDo.func1(0xc000246000, 0x2f362b6, 0xc0002ddb60)
	/build/src/.../vendor/github.com/mediocregopher/radix/v3/pool.go:354 +0xc5
created by github.com/mediocregopher/radix/v3.(*Pool).atIntervalDo
	/build/src/.../vendor/github.com/mediocregopher/radix/v3/pool.go:347 +0x85

Sentinel Client method leaks secondary pools

Disclaimer: This is all theoretical and I haven't tested it. I only looked at the code and stepped through this scenario in my head.

The Sentinel.Client method says

NOTE the Client should not be closed.

This can be a problem since Clients to secondaries are created by calling the SentinelPoolFunc, which is primarily used for creating a long lived pool for connections to the primary and has the same configuration for primaries and secondaries.

If the returned pool for one of the secondaries is not closed, the pool can leak connections (at least for some time, depending on the pool configuration/implementation). But users don't know this and are told to not close the returned Client.

Note that for the case where an user wants a client for the primary they really must not call Close.

This doesn't seem like a big problem to me, since I don't expect many people to manually use connections to secondaries, but is still worth fixing.

I thought about two ways to solve this, but there are probably others:

  1. Save (and reuse) the non-primary pools and close them when necessary (on Master change).
  2. Always return a fresh Client, using the SentinelConnFunc, even for the primary and tell users to manually close the connections.

How to pipeline multiple LUA scripts ?

EvalScript.Cmd implements Action, but Pipeline accepts CmdAction.

for idx := range backlogs {
	.......
	redisCommands = append(
		redisCommands,
		LUA_SET_ANSWER.Cmd(
			&results[idx],
			rkGameActives,
			rkUserID,
			rkQuestionLog,
			rkChoiceID,
		).(radix.CmdAction),
	)
}

Broken govendor/dep.. vendor support

@mediocregopher I appreciate that you're targeting the new modules feature in golang, but imports with /v3/ in the path broke support for the old GOPATH way (try using govendor/dep - it is not working anymore). Please consider some backward compatibility, because some people are not ready to change dependency management systems (and don't want to change the library, because radix fits our needs perfectly :-) )

Timeout logic needs auditing

This got brought up in #51. The current behavior across the package wrt connection timeouts and pinging needs to be looked at and normalized:

  • Dial does not have a read timeout by default, which poses a hazard for anyone connecting to redis outside of an internal network (or inside a shitty internal network). A network partition could cause an existing connection to lose all packets, and radix will block until the kernel closes the tcp connection (which can be a while). A workaround sketch follows this list.

  • Ensure TCP keepalive is being used (if it's not the default).

  • PersistentPubSub doesn't currently call PING on its connection automatically. That should be remedied. Also, PubSub doesn't do anything automatically either, but maybe that's fine? If it is it should at least be documented that Ping should be called periodically.

  • timeoutOk isn't actually needed at all, since there's no way for PubSub to get a Conn from a Pool. It can be deleted.

  • In general audit the default timeout values on both connection and application level. A default TCP level timeout of 5 seconds, and a corresponding application level timeout of 10 seconds, would be fine for most use-cases I think.
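
Until the defaults are audited, callers can set timeouts themselves; a sketch using the DialTimeout call that appears elsewhere in this document (the exact deadline semantics of DialTimeout should be checked against the package docs):

// Sketch: a pool whose connections use an explicit timeout, so a dead
// peer cannot block reads indefinitely.
connFn := func(network, addr string) (radix.Conn, error) {
	return radix.DialTimeout(network, addr, 10*time.Second)
}

pool, err := radix.NewPool("tcp", "127.0.0.1:6379", 10, radix.PoolConnFunc(connFn))
if err != nil {
	// handle error
}
_ = pool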

missing "v3" related while installing

Hello,
when installing, I hit the problem below:

$ go get github.com/mediocregopher/radix
package github.com/mediocregopher/radix/v3/internal/bytesutil: cannot find package "github.com/mediocregopher/radix/v3/internal/bytesutil" in any of:
	/usr/local/go/src/github.com/mediocregopher/radix/v3/internal/bytesutil (from $GOROOT)
	/home/ictar/go_workspace/src/github.com/mediocregopher/radix/v3/internal/bytesutil (from $GOPATH)
package github.com/mediocregopher/radix/v3/resp: cannot find package "github.com/mediocregopher/radix/v3/resp" in any of:
	/usr/local/go/src/github.com/mediocregopher/radix/v3/resp (from $GOROOT)
	/home/ictar/go_workspace/src/github.com/mediocregopher/radix/v3/resp (from $GOPATH)
package github.com/mediocregopher/radix/v3/resp/resp2: cannot find package "github.com/mediocregopher/radix/v3/resp/resp2" in any of:
	/usr/local/go/src/github.com/mediocregopher/radix/v3/resp/resp2 (from $GOROOT)
	/home/ictar/go_workspace/src/github.com/mediocregopher/radix/v3/resp/resp2 (from $GOPATH)

Could anyone help me find the v3 folder?

PubSub not writing to channel

The example code for PubSub in GoDoc does not deliver messages to the channel when the publisher and subscriber are separate programs. The GoDoc example runs correctly when everything is in one Go program. My code is below.

publisher.go

package main

import (
	"github.com/mediocregopher/radix.v3"
	"time"
	"github.com/satori/go.uuid"
	"log"
)

func main(){
	stub, stubCh := radix.PubSubStub("tcp", "localhost:6379", func([]string) interface{} {
		return nil
	})

	pstub := radix.PubSub(stub)
	if pstub.Ping() != nil {
		log.Fatal(pstub.Ping())
	}

	for {
		stubCh <- radix.PubSubMessage{
			Channel: "foo",
			Message: []byte(uuid.NewV4().String()),
		}
		time.Sleep(1 * time.Second)
	}
}

subscriber.go

package main

import (
	"github.com/mediocregopher/radix.v3"
	"log"
)

func main(){
	stub, _ := radix.PubSubStub("tcp", "localhost:6379", func([]string) interface{} {
		return nil
	})

	// Use PubSub to wrap the stub like we would for a normal redis connection
	pstub := radix.PubSub(stub)

	// Subscribe msgCh to "foo"
	msgCh := make(chan radix.PubSubMessage)
	if err := pstub.Subscribe(msgCh, "foo"); err != nil {
		log.Fatal(err)
	}

	// now msgCh is subscribed the publishes being made by the go-routine above
	// will start being written to it
	for m := range msgCh {
		log.Printf("%v", string(m.Message))
	}
}
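
PubSubStub is an in-memory stub: the publisher's stub and the subscriber's stub know nothing about each other or about the real server, so no messages ever cross processes. Against a real Redis instance the same flow would look roughly like this (a sketch, assuming a server on localhost):

// subscriber: a real connection wrapped with radix.PubSub
conn, err := radix.Dial("tcp", "localhost:6379")
if err != nil {
	log.Fatal(err)
}
ps := radix.PubSub(conn)

msgCh := make(chan radix.PubSubMessage)
if err := ps.Subscribe(msgCh, "foo"); err != nil {
	log.Fatal(err)
}
for m := range msgCh {
	log.Printf("%s", m.Message)
}

// publisher (separate program): publish through a normal client
client, err := radix.NewPool("tcp", "localhost:6379", 1)
if err != nil {
	log.Fatal(err)
}
for {
	if err := client.Do(radix.Cmd(nil, "PUBLISH", "foo", "hello")); err != nil {
		log.Fatal(err)
	}
	time.Sleep(1 * time.Second)
}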

Error handling for empty response

The resp object is being evaluated even if the key was not found. Is there a way to get a not found error instead before type asserting the interface?

Can't differentiate between GET with 0 or (nil) return value

Hi! Really enjoying integrating radix in our API framework. Sorry if this is a dumb question.

If a key does not exist, err will be nil, but (due to the int64 type below), the return value from the redis call is 0 (redis in fact will have returned nil):

    err := client.Do(radix.Cmd(&b, "EXISTS", key))
    fmt.Printf("Exists '%t' err is '%v'", b, err)

    var expires int64
    err = client.Do(radix.Cmd(&expires, "GET", key))
    if err != nil {
        fmt.Printf("GET %s failed", key)
        return false, 0, err
    }

The Redis spec allows for GET to return a (nil) special type in the case where a key doesn't exist. How does radix.v3 allow developers to detect the difference? Do I have to call EXISTS along with GET inside a Pipeline?
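
radix v3 provides a MaybeNil wrapper for this case; a sketch, assuming the field names below match the package (check the docs for the exact shape):

var expires int64
mn := radix.MaybeNil{Rcv: &expires}
if err := client.Do(radix.Cmd(&mn, "GET", key)); err != nil {
	return false, 0, err
}
if mn.Nil {
	// the key did not exist; expires was never written
	return false, 0, nil
}
return true, expires, nil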
