groupcache's People

Contributors

adg, bmizerany, boulos, bradfitz, codelingobot, dakerfp, desimone, dgryski, dsnet, edwardbetts, elimisteve, fumin, haraldnordgren, kevinburke, lorneli, luciferous, luit, maruhyl, mdentonskyport, nf, pierrre, ryanslade, shawnps, tippjammer, two

groupcache's Issues

defaultReplicas = 3 causes badly distributed hash rings

Currently the hash ring uses 3 replicas per node, which can cause statistical imbalances in key distribution, especially with a low number of machines.

The number of replicas should be configurable. For example, a similar Python hash ring library uses 40 replicas per node. From tests I made, this is indeed the vicinity of the replica count at which the distribution stabilizes, regardless of the number of nodes.

I made a few benchmarks directly on the consistent hash to illustrate this (I can post the code that generates this if you want; it uses random "IP addresses" and keys). The bars show the number of random keys, out of 10k, that mapped to each random node. All tests with the same number of nodes used the same node "IPs".

2 Nodes X  3 Replicas:
    36.75% | ####################################
    63.25% | ###############################################################

2 Nodes X  33 Replicas:
    49.77% | #################################################
    50.23% | ##################################################

3 Nodes X  3 Replicas:
    23.05% | #######################
    33.45% | #################################
    43.50% | ###########################################

3 Nodes X  33 Replicas:
    31.50% | ###############################
    31.14% | ###############################
    37.36% | #####################################

7 Nodes X  3 Replicas:
    30.08% | ##############################
    11.24% | ###########
    6.44% | ######
    7.29% | #######
    27.42% | ###########################
    9.26% | #########
    8.27% | ########

7 Nodes X  43 Replicas:
    13.53% | #############
    16.00% | ################
    16.92% | ################
    10.31% | ##########
    14.78% | ##############
    12.74% | ############
    15.72% | ###############

README.md - presentation link

Hi Patrick, neither the presentation link nor its domain seems to be working.
Just a heads up, in case it's a typo in the link.

A bug in getting from peers

Reproduce steps

pool := groupcache.NewHTTPPool("http://127.0.0.1:"+os.Getenv("PORT"))
cache := groupcache.NewGroup("cacher", 64<<20, someGetterFunc)
pool.Set("http://127.0.0.1:50000", "http://127.0.0.1:50001")

Run this on two ports, :50000 and :50001.
Now, if I use /xxx as a key, groupcache acts as if there are no peers at all,
i.e. it only gets data locally instead of fetching from peers.

The problem is the leading slash in the key (if the key is xxx, everything works), and I still can't figure out why.

Support whether the returned value was cached or not

There is no way to know whether a returned value came from the getter call or from the cache. This is useful to know in certain circumstances.

Something like:

// SomeMethod ... and also returns whether the value was cached result or not
func (g *Group) SomeMethod(key string, fn func() (interface{}, error)) (interface{}, bool, error) {

For a cached lookup, the returned bool would be true.

A bug in sink.go

func (s *stringSink) SetProto(m proto.Message) error {
    b, err := proto.Marshal(m)
    if err != nil {
        return err
    }
    // Should clear the old string
    s.v.s = ""
    s.v.b = b
    *s.sp = string(b)
    return nil
}

As the comment says, if I call *stringSink.SetString first and then *stringSink.SetProto, stringSink.v ends up in an inconsistent state.

lru cache is not thread-safe

In lru.Add:

// Add adds a value to the cache.
func (c *Cache) Add(key Key, value interface{}) {
	if c.cache == nil {
		c.cache = make(map[interface{}]*list.Element)
		c.ll = list.New()
	}
	if ee, ok := c.cache[key]; ok {
		c.ll.MoveToFront(ee)
		ee.Value.(*entry).value = value
		return
	}
	ele := c.ll.PushFront(&entry{key, value})
	c.cache[key] = ele
	if c.MaxEntries != 0 && c.ll.Len() > c.MaxEntries {
		c.RemoveOldest()
	}
}

c.cache[key] = ele is not thread-safe. Is it meant to be?
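For what it's worth, groupcache itself guards the lru package with a mutex in its own internal cache type rather than making lru thread-safe. The wrapping pattern looks roughly like this (a self-contained sketch with a minimal LRU, not the actual lru package):

```go
package main

import (
	"container/list"
	"fmt"
	"sync"
)

// lockedCache keeps the LRU itself single-threaded and guards
// every operation with one mutex, the way groupcache's internal
// cache type wraps lru.Cache.
type lockedCache struct {
	mu    sync.Mutex
	ll    *list.List
	cache map[string]*list.Element
}

type entry struct {
	key   string
	value interface{}
}

func newLockedCache() *lockedCache {
	return &lockedCache{ll: list.New(), cache: make(map[string]*list.Element)}
}

func (c *lockedCache) Add(key string, value interface{}) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if ee, ok := c.cache[key]; ok {
		c.ll.MoveToFront(ee)
		ee.Value.(*entry).value = value
		return
	}
	c.cache[key] = c.ll.PushFront(&entry{key, value})
}

func (c *lockedCache) Get(key string) (interface{}, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if ee, ok := c.cache[key]; ok {
		c.ll.MoveToFront(ee)
		return ee.Value.(*entry).value, true
	}
	return nil, false
}

func main() {
	c := newLockedCache()
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) { defer wg.Done(); c.Add("k", i) }(i) // concurrent writers
	}
	wg.Wait()
	v, ok := c.Get("k")
	fmt.Println(ok, v != nil)
}
```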

how to emulate Expiration time

I noticed that in the design decisions, setting an expiration time has been left out. I imagine this is because you have an idea of how to emulate this feature.

The use case I have in mind is building a caching reverse proxy in front of a web app, where I know pages should be cached for X minutes.

Could you please elaborate a bit on how you would approach this ?

(Not sure this is the right place to ask question.)

trouble with upper bounds of memory limits

Hi,

Thanks for groupcache, we are using it with almost great success.

One thing I would like to understand better is how to estimate the upper bound of the memory needs of an application whose primary job is simply to serve things from groupcache.

We have 6GB VMs running groupcache, and our first inclination was to set the available cache size to 5GB, leaving 1GB free for the OS and other things.

Right away this crashed under production load when the OOM killer was invoked. So we turned the number down to just under 2GB, so that the memory the process uses stays under 6GB before the OOM killer kicks in.

The next step was to profile the memory, and lo and behold, the heap size does not far exceed the 2GB we configured in groupcache, yet the process ends up using 4-6GB of RAM.

Our latest attempt was to manually call debug.FreeOSMemory() every couple of minutes; shortly after it runs, the amount of memory the process requires drops, and much is returned to the OS.

However, we still have occasional crashes due to the OOM killer. We added SSD swap to buffer this case, but after 48 hours without problems there was a blip (a substantial increase) in traffic to these machines, causing a single one to get OOM-killed, which then snowballed into several others.

So, to make this work we could drop the 2GB cache setting (i.e. the second parameter to groupcache.NewGroup) even lower, like 1GB, but it seems a bit silly to have a 6GB VM that can only use 1GB for cache.

Is this just a downside of Go's approach to memory management for caching?

Not sure if it matters, but our use case is very similar to dl.google.com: we serve downloads of medium-sized files (50MB-1GB, cached in 100MB chunks), and groupcache fronts a slower, more expensive API that has the files we need to serve. So naturally groupcache seemed like a great fit.

We would be extremely grateful for any tips you could share to manage this type of issue. I keep thinking there is something I am missing.

Thanks for any insight you can share.

  • scott

http pool basePath getter and/or setter?

I'm building a server that uses groupcache, and I would like to be able to conveniently wrap the http pool's handler inside a mux (specifically to add a monitoring status call on the server).

I simply thought of adding a handler func that forwards everything under /_groupcache/ to groupcache's handler. But this will break if the hard-coded base path ever changes.

So I thought of adding at least a getter, if not a setter for the base path. I understand the consistency problem with adding a setter. Will a pull request for GetBasePath() and/or SetBasePath() be accepted?

concurrency problem in singleflight

I found the code below in singleflight.

func (g *Group) Do(key string, fn func() (interface{}, error)) (interface{}, error) {
    g.mu.Lock()
    if g.m == nil {
	    g.m = make(map[string]*call)
    }
    if c, ok := g.m[key]; ok {
	    g.mu.Unlock()
	    c.wg.Wait()
	    return c.val, c.err
    }
    c := new(call)
    c.wg.Add(1)
    g.m[key] = c
    g.mu.Unlock()

    c.val, c.err = fn()
    c.wg.Done()

    g.mu.Lock()
    delete(g.m, key)
    g.mu.Unlock()

    return c.val, c.err
}

In most circumstances this code performs well and handles concurrency correctly.

However, is it possible for c.wg.Wait() to run after c.wg.Done() but before delete(g.m, key) (so that the key still exists rather than having been deleted)? If so, there might be a concurrency problem. To avoid it, c.wg.Done() should also be protected by the mu lock.

A bug exists in http.go?

func (p *HTTPPool) PickPeer(key string) (ProtoGetter, bool) {
    // TODO: make checksum implementation pluggable
    h := crc32.Checksum([]byte(key), crc32.IEEETable)
    p.mu.Lock()
    defer p.mu.Unlock()
    if len(p.peers) == 0 {
        return nil, false
    }
    n := int(h)
    if n < 0 {
        n *= -1
    }
    if peer := p.peers[n%len(p.peers)]; peer != p.self {
        // TODO: pre-build a slice of *httpGetter when Set()
        // is called to avoid these two allocations.
        return &httpGetter{p.Transport, peer + p.basePath}, true
    }
    return nil, false
}

On a 32-bit OS (where -2^31 <= int <= 2^31 - 1), if h = 2147483648 then n is negative (-2147483648), and it stays negative even after multiplying by -1, because negating the minimum int32 overflows. Indexing p.peers with it may then cause a runtime error: index out of range.
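The wraparound can be demonstrated on any platform by using int32 to stand in for a 32-bit int; the last lines show one safe alternative, doing the modulo in unsigned space (a sketch, not the actual fix applied to http.go):

```go
package main

import "fmt"

func main() {
	const numPeers = 7
	h := uint32(2147483648) // a perfectly possible CRC-32 value

	// On a 32-bit platform int is 32 bits wide; int32 simulates that.
	n := int32(h) // wraps to -2147483648
	if n < 0 {
		n *= -1 // still -2147483648: negating MinInt32 overflows
	}
	// A negative n makes p.peers[n%len(p.peers)] panic.
	fmt.Println(n, int(n)%numPeers)

	// Safe alternative: take the modulo before converting to int.
	idx := int(h % uint32(numPeers))
	fmt.Println(idx) // always in [0, numPeers)
}
```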

lru: panic: interface conversion: interface {} is nil, not *lru.entry

I'm occasionally getting this panic when running Add():

http: panic serving 127.0.0.1:37287: interface conversion: interface {} is nil, not *lru.entry
goroutine 6797289 [running]:
    net/http.(*conn).serve.func1(0xc8207ee8f0, 0x7fb61f0370e0, 0xc820020510)
        /usr/local/go/src/net/http/server.go:1287 +0xb5
    github.com/golang/groupcache/lru.(*Cache).removeElement(0xc820152500, 0xc82011f980)
        /Users/jb/src/github.com/syncthing/discosrv/Godeps/_workspace/src/github.com/golang/groupcache/lru/lru.go:108 +0x1a2
    github.com/golang/groupcache/lru.(*Cache).RemoveOldest(0xc820152500)
        /Users/jb/src/github.com/syncthing/discosrv/Godeps/_workspace/src/github.com/golang/groupcache/lru/lru.go:102 +0x4d
    github.com/golang/groupcache/lru.(*Cache).Add(0xc820152500, 0x8c3dc0, 0xc820aa2930, 0x9f4220, 0xc8207ff9a0)
        /Users/jb/src/github.com/syncthing/discosrv/Godeps/_workspace/src/github.com/golang/groupcache/lru/lru.go:69 +0x2d9
    main.(*querysrv).limit(0xc820076e60, 0xc820aa2920, 0x10, 0x10, 0x10)
        /Users/jb/src/github.com/syncthing/discosrv/querysrv.go:308 +0x207
    main.(*querysrv).handler(0xc820076e60, 0x7fb61f038ab8, 0xc8207ee9a0, 0xc8209a9260)
        /Users/jb/src/github.com/syncthing/discosrv/querysrv.go:107 +0x20d
    main.(*querysrv).(main.handler)-fm(0x7fb61f038ab8, 0xc8207ee9a0, 0xc8209a9260)
        /Users/jb/src/github.com/syncthing/discosrv/querysrv.go:75 +0x3e
    net/http.HandlerFunc.ServeHTTP(0xc820144920, 0x7fb61f038ab8, 0xc8207ee9a0, 0xc8209a9260)
        /usr/local/go/src/net/http/server.go:1422 +0x3a
    net/http.(*ServeMux).ServeHTTP(0xc820053290, 0x7fb61f038ab8, 0xc8207ee9a0, 0xc8209a9260)
        /usr/local/go/src/net/http/server.go:1699 +0x17d
    net/http.serverHandler.ServeHTTP(0xc8200135c0, 0x7fb61f038ab8, 0xc8207ee9a0, 0xc8209a9260)
        /usr/local/go/src/net/http/server.go:1862 +0x19e
    net/http.(*conn).serve(0xc8207ee8f0)
        /usr/local/go/src/net/http/server.go:1361 +0xbee
    created by net/http.(*Server).Serve
        /usr/local/go/src/net/http/server.go:1910 +0x3f6

The code calling this is https://github.com/syncthing/discosrv/blob/master/querysrv.go#L308. It seems to start happening once the process has been running for a while. Once it starts, it happens regularly, possibly on every Add() call (I'm not sure, sorry).

Group based PeerPicker

Hi,

This is a question. The newGroup function is private
func newGroup(name string, cacheBytes int64, getter Getter, peers PeerPicker) *Group

That means I can't pass my own PeerPicker when creating a group; the default HTTPPool PeerPicker is always used. Why this restriction?

I may not understand groupcache correctly; if so, correct me. My understanding is that the default PeerPicker is the portPicker in HTTPPool. It uses consistenthash.Map to hash a string onto groupcache peers, whether or not a particular peer has created a particular group. Suppose I have three processes running groupcache: p1, p2, and p3, and three groups: g1, g2, g3, with the following peer-to-group mapping:
p1: g1, g2
p2: g2, g3
p3: g1, g3

If in p2 I call

g2 := groupcache.NewGroup("g2", 10<<20, g2Getter)
g2.Get(ctx, "foo", groupcache.StringSink(&s))

and "foo" is hashed to p3, where there is no group "g2", will that result in "foo" never being cached?

I am adding peer/group auto-discovery with ZooKeeper, where each groupcache process registers itself in zk. Then a group-based PeerPicker could be written to hash the key only onto those peers that have the group. However, since newGroup with a user-defined picker is private, I can't find a way to set my own group-based PeerPicker.

Thanks,

-John

Concurrent access to lru.Cache

It doesn't look like lru.Cache is safe for concurrent access. Is this correct? If so, would you be opposed to me adding in a mutex?

I'm new to open source contribution and this looks like it might be an easy win.

Spread peers updates

Hello!
First of all, thanks a lot for this project.

My issue is:
Is it possible to automatically propagate updates of the peers list made on one node to the other nodes?
You have this implementation

func (p *HTTPPool) Set(peers ...string) {
    p.mu.Lock()
    defer p.mu.Unlock()
    p.peers = consistenthash.New(defaultReplicas, nil)
    p.peers.Add(peers...)
    p.httpGetters = make(map[string]*httpGetter, len(peers))
    for _, peer := range peers {
        p.httpGetters[peer] = &httpGetter{transport: p.Transport, baseURL: peer + p.basePath}
    }
}

I don't see anything in the code above that would make that happen.
Or is this inconsistent with the goals of the groupcache project? If so, why?
Thanks!

Citation needed

Can we get a citation for this readme statement:

comes with a cache filling mechanism. Whereas memcached just says "Sorry, cache miss", often resulting in a thundering herd of database (or whatever) loads from an unbounded number of clients (which has resulted in several fun outages), groupcache coordinates cache fills such that only one load in one process of an entire replicated set of processes populates the cache, then multiplexes the loaded value to all callers.

Specifically: (which has resulted in several fun outages)

Need a function clear all caches

I have many HTML files that I want cached in groupcache.
These files change sometimes, though not frequently.
If I watch the folder, I get a notification when a file changes.
I then want to clear all the caches in groupcache without restarting my service.
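Since groupcache has no clear operation, a workaround that is sometimes suggested is to prefix every key with a generation number and bump it when files change: old entries are never requested again and eventually fall out of the LRU. The helpers below (`versionedKey`, `invalidateAll`) are hypothetical, not part of groupcache's API:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// generation is bumped on every invalidation; it becomes part of
// every cache key, so a bump makes all previous keys unreachable.
var generation uint64

func versionedKey(key string) string {
	return fmt.Sprintf("v%d:%s", atomic.LoadUint64(&generation), key)
}

// invalidateAll would be called from, e.g., a file-watcher callback.
func invalidateAll() { atomic.AddUint64(&generation, 1) }

func main() {
	fmt.Println(versionedKey("index.html")) // e.g. v0:index.html
	invalidateAll()                         // a file changed
	fmt.Println(versionedKey("index.html")) // e.g. v1:index.html
}
```

The cost is that stale entries still occupy memory until evicted, but they are never served.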

Support for destruction of group

groupcache currently only supports creating a new group and getting an existing group. Would it be possible to provide a way to destroy an existing group?

Exposing the OnEviction callback

Could the OnEvicted callback in the lru cache be exposed by groupcache? Is there a non-obvious reason it isn't exposed?

Context not being passed to peers?

Hi,

I'm using groupcache to cache MySQL query results. The query has several parameters that would be awkward to pass as part of the key, so I pass them as part of the ctx.

But that only works on the "local" peer; other peers get a nil context, so I can't make the query.

I don't know whether ctx is not meant to be used like that or whether this is a bug. For now I will be forced to make the context part of the key and recreate it by parsing the key.

Thanks for your help :)

Question about removing a broken node from group

A group's peers remain unchanged after initialization (which is presumably why the consistenthash package provides no Remove method). But a node may become unavailable due to the network, its hardware, or something else.

Why doesn't groupcache support removing a broken node?

I guess the reason is that another node could reuse the broken one's network address (ip:port) and replace it, but I figured I would ask in case I'm wrong.

Thanks in advance.

Fails when building app with gccgo

Building like this:

go build -compiler gccgo -o myapp .

Yields no binary but an error:

github.com/golang/groupcache

../../golang/groupcache/groupcache.go:197:18: error: argument 1 has incompatible type (different receiver types)
g.peersOnce.Do(g.initPeers)
^

I have not looked into it beyond getting the message.
Building with gc works fine.

I am on a freshly updated Archlinux with gccgo --version:
gccgo (GCC) 4.8.1

overflow issue in http.go

func (p *HTTPPool) PickPeer(key string) (ProtoGetter, bool) {
    // TODO: make checksum implementation pluggable
    h := crc32.Checksum([]byte(key), crc32.IEEETable)
    p.mu.Lock()
    defer p.mu.Unlock()
    if len(p.peers) == 0 {
        return nil, false
    }
    if peer := p.peers[int(h)%len(p.peers)]; peer != p.self {
        // TODO: pre-build a slice of *httpGetter when Set()
        // is called to avoid these two allocations.
        return &httpGetter{p.Transport, peer + p.basePath}, true
    }
    return nil, false
}

crc32.Checksum returns a uint32; might int(h) overflow on 32-bit platforms?

Make cache an interface?

I would also like to have pooled disk caches as well as pooled memory caches. It would be useful if the cache code were an interface, to allow this or other custom caches, though I know this has performance implications.

This late in the game the change may be too disruptive to existing code, so just close the issue if it's unlikely to be accepted.

Extra un-escaping of keys might cause errors

In the HTTP pool, when sending a request to a peer for a key, groupcache does:

u := fmt.Sprintf(
    "%v%v/%v",
    h.baseURL,
    url.QueryEscape(in.GetGroup()),
    url.QueryEscape(in.GetKey()),
)

req, err := http.NewRequest("GET", u, nil)

When handling the request (in http pool ServeHTTP), the peer receiving the request does:

key, err := url.QueryUnescape(parts[1])

This matches the escaping on the sending side. However, http.ServeMux already un-escapes the path before calling HTTPPool.ServeHTTP. As a result, if the original key contained URL-escaped characters, the double un-escape can cause errors or cache misses.

In my case, the key is a URL to be fetched and cached. If the original URL contained %20, by the time the peer's fetch function receives it, it has become a literal space inside the URL, resulting in errors.

The ServeMux un-escaping the path is the correct behavior AFAIK, so I'd assume the extra un-escaping in groupcache is the problem. Removing it certainly solved the problem for me.

HTTPPool: hash function is not used

In NewHTTPPoolOpts(), HTTPPool.peers is initialized with the given hash function.
In HTTPPool.Set(), the new HTTPPool.peers is rebuilt without the previously supplied hash function.

GOARCH=386?

I can see from a panic that you are using int64s; is there a way to build groupcache easily on a 32-bit architecture?

GET / HTTP/2.0
Host: 0.0.0.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.5
Cache-Control: max-age=0
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:52.0) Gecko/20100101 Firefox/52.0


runtime error: invalid memory address or nil pointer dereference
/usr/local/go/src/runtime/panic.go:489 (0x806c82a)
	gopanic: reflectcall(nil, unsafe.Pointer(d.fn), deferArgs(d), uint32(d.siz), uint32(d.siz))
/usr/local/go/src/runtime/panic.go:63 (0x806b9cc)
	panicmem: panic(memoryError)
/usr/local/go/src/runtime/signal_unix.go:290 (0x807ef46)
	sigpanic: panicmem()
/usr/local/go/src/sync/atomic/asm_386.s:112 (0x80492cc)
	AddUint64: MOVL	0, AX // crash with nil ptr deref
src/github.com/golang/groupcache/groupcache.go:470 (0x838eca1)
src/github.com/golang/groupcache/groupcache.go:207 (0x838dc5c)
src/github.com/DanielRenne/GoCore/core/fileCache/fileCache.go:53 (0x83922d9)
src/github.com/atlonaeng/studio/controllers/appController.go:471 (0x868d668)
src/github.com/atlonaeng/studio/controllers/appController.go:177 (0x868b2d2)
src/github.com/gin-gonic/gin/context.go:97 (0x8427217)
src/github.com/utrack/gin-csrf/csrf.go:94 (0x8436687)
src/github.com/gin-gonic/gin/context.go:97 (0x8427217)
src/github.com/gin-gonic/contrib/sessions/sessions.go:65 (0x84355cc)
src/github.com/gin-gonic/gin/context.go:97 (0x8427217)
src/github.com/gin-gonic/gin/recovery.go:45 (0x8432dba)
src/github.com/gin-gonic/gin/context.go:97 (0x8427217)
src/github.com/gin-gonic/gin/logger.go:63 (0x84322f5)
src/github.com/gin-gonic/gin/context.go:97 (0x8427217)
src/github.com/gin-gonic/gin/gin.go:284 (0x842bc5e)
src/github.com/gin-gonic/gin/gin.go:265 (0x842b727)
/usr/local/go/src/net/http/server.go:2568 (0x8270780)
	serverHandler.ServeHTTP: handler.ServeHTTP(rw, req)
/usr/local/go/src/net/http/server.go:3088 (0x8271d2d)
	initNPNRequest.ServeHTTP: h.h.ServeHTTP(rw, req)
<autogenerated>:312 (0x82901fb)
/usr/local/go/src/net/http/h2_bundle.go:4319 (0x8288e3b)
	(Handler).ServeHTTP-fm: handler := sc.handler.ServeHTTP
/usr/local/go/src/net/http/h2_bundle.go:4599 (0x824f2ad)
	(*http2serverConn).runHandler: handler(rw, req)
/usr/local/go/src/runtime/asm_386.s:1629 (0x8093ca1)
	goexit: BYTE	$0x90	// NOP

Best practice for updating a cache entry frequently

My question is a bit similar to issue #3.

I have a map that is currently managed in the RAM of the go application on a single instance. I want to share this map between multiple instances for scaling. I am already using consul for discovery of peer instances and I am currently solving this with redis, however I am not happy with the fact that I am not leveraging each machine's RAM (so in that sense I feel that redis is more a DB than a cache). This is one reason why I love groupcache.

I have a constraint though: my map changes all the time (I'm getting requests to update it via http). So for a key K1 in the map, it is likely that m[K1] will be updated very frequently (possibly every one second or less).

So my questions are:

  1. Am I choosing the wrong architecture? Should I use something like Redis or memcached instead?
  2. If groupcache is a good solution for my use case, do I have to constantly remove and add (say in an LRU cache) or is there a smarter way?

Thanks!

Got a panic with GOOS=linux GOARCH=386

We use groupcache for a file service; it works well on linux/amd64.
But when we compile with GOOS=linux GOARCH=386, we get a panic at runtime.

runtime error: invalid memory address or nil pointer dereference

panic(0x8406140, 0x1880a038)
    /usr/local/go/src/runtime/panic.go:443 +0x3fd
sync/atomic.AddUint64(0x1880c7ac, 0x1, 0x0, 0x80554be, 0x188b4360)
    /usr/local/go/src/sync/atomic/asm_386.s:112 +0xc
github.com/golang/groupcache.(*AtomicInt).Add(0x1880c7ac, 0x1, 0x0)
    .../github.com/golang/groupcache/groupcache.go:470 +0x31
github.com/golang/groupcache.(*Group).Get(0x1880c700, 0x83850c0, 0x188b4300, 0x1880f320, 0x27, 0xb76a68d0, 0x188b4360, 0x0, 0x0)
    .../github.com/golang/groupcache/groupcache.go:207 +0x82

My Go version is go1.6.2 darwin/amd64.

I found a related post:
golang/go#5278

singleflight: add OnceGroup struct

OnceGroup.Do would have the same semantics as Group.Do, but caches and returns the first computed result.

Example: looking up the user ID in /etc/passwd for a given username; something you might need to do once and no more than that, since you don't expect the value to change during the life of the process.
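The proposed semantics can be sketched by pairing singleflight's per-key map with a sync.Once, which both deduplicates concurrent callers and caches the first result for later ones (a hypothetical implementation, not the proposed patch):

```go
package main

import (
	"fmt"
	"sync"
)

// onceGroup: like singleflight.Group, but the entry for a key is
// never deleted, so fn runs at most once per key for the life of
// the process.
type onceGroup struct {
	mu sync.Mutex
	m  map[string]*onceCall
}

type onceCall struct {
	once sync.Once
	val  interface{}
	err  error
}

func (g *onceGroup) Do(key string, fn func() (interface{}, error)) (interface{}, error) {
	g.mu.Lock()
	if g.m == nil {
		g.m = make(map[string]*onceCall)
	}
	c, ok := g.m[key]
	if !ok {
		c = new(onceCall)
		g.m[key] = c
	}
	g.mu.Unlock()

	// sync.Once blocks concurrent callers until fn finishes and
	// returns immediately (with the cached result) ever after.
	c.once.Do(func() { c.val, c.err = fn() })
	return c.val, c.err
}

func main() {
	var g onceGroup
	calls := 0
	for i := 0; i < 3; i++ {
		v, _ := g.Do("uid:alice", func() (interface{}, error) {
			calls++
			return 1000, nil
		})
		fmt.Println(v)
	}
	fmt.Println("fn ran", calls, "time(s)") // fn ran 1 time(s)
}
```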

Exposed interface Sink not usable

I am currently trying to implement a custom sink, but the Sink interface has an unexported method, which makes it impossible to implement outside the package. Is this intended? If not, will you accept a PR changing it?

My use case for a custom Sink: I am caching 4MB chunks and would like to read only 500 bytes at a time at different offsets. But on each call groupcache allocates the full 4MB in cloneBytes(); I would like to allocate just the right amount to reduce allocations.

cc @bradfitz
