
golang-lru's Introduction

golang-lru

This provides the lru package, which implements a fixed-size, thread-safe LRU cache. It is based on the cache in Groupcache.

Documentation

Full docs are available on Go Packages

LRU cache example

package main

import (
	"fmt"
	"github.com/hashicorp/golang-lru/v2"
)

func main() {
	l, _ := lru.New[int, any](128)
	for i := 0; i < 256; i++ {
		l.Add(i, nil)
	}
	if l.Len() != 128 {
		panic(fmt.Sprintf("bad len: %v", l.Len()))
	}
}

Expirable LRU cache example

package main

import (
	"fmt"
	"time"

	"github.com/hashicorp/golang-lru/v2/expirable"
)

func main() {
	// make cache with 10ms TTL and 5 max keys
	cache := expirable.NewLRU[string, string](5, nil, time.Millisecond*10)


	// set value under key1.
	cache.Add("key1", "val1")

	// get value under key1
	r, ok := cache.Get("key1")

	// check for OK value
	if ok {
		fmt.Printf("value before expiration is found: %v, value: %q\n", ok, r)
	}

	// wait for cache to expire
	time.Sleep(time.Millisecond * 12)

	// get value under key1 after key expiration
	r, ok = cache.Get("key1")
	fmt.Printf("value after expiration is found: %v, value: %q\n", ok, r)

	// set value under key2, would evict old entry because it is already expired.
	cache.Add("key2", "val2")

	fmt.Printf("Cache len: %d\n", cache.Len())
	// Output:
	// value before expiration is found: true, value: "val1"
	// value after expiration is found: false, value: ""
	// Cache len: 1
}


golang-lru's Issues

Panic on concurrent writes

As far as I understand, this LRU is expected to be thread safe, therefore it was unexpected to hit this panic:

E 2021-03-31T09:37:47.020499811Z fatal error: concurrent map writes
E 2021-03-31T09:37:47.024404395Z goroutine 189587 [running]:
E 2021-03-31T09:37:47.024417160Z runtime.throw(0x12e267b, 0x15)
E 2021-03-31T09:37:47.024424707Z      /usr/local/go/src/runtime/panic.go:1117 +0x72 fp=0xc000968f68 sp=0xc000968f38 pc=0x439f12
E 2021-03-31T09:37:47.024433250Z runtime.mapassign(0x113f2a0, 0xc000e04810, 0xc000969038, 0x1b9abe0)
E 2021-03-31T09:37:47.024441446Z      /usr/local/go/src/runtime/map.go:585 +0x5aa fp=0xc000968fe8 sp=0xc000968f68 pc=0x413a2a
E 2021-03-31T09:37:47.024449479Z github.com/hashicorp/golang-lru/simplelru.(*LRU).Add(0xc0006a5160, 0x11bd520, 0xc000b8b600, 0x10e29e0, 0xc001558920, 0xc000eca8d1)
E 2021-03-31T09:37:47.024457593Z      /usr/go-workspace/pkg/mod/github.com/hashicorp/[email protected]/simplelru/lru.go:62 +0x2f1 fp=0xc000969058 sp=0xc000968fe8 pc=0xf81351

If any extra info is required, please let me know. My use of the LRU is simple: a wrapper around a client with a cache, and the add-to-cache call is a one-liner itself.

Would you accept a PR for an interface?

Adding an interface would make it pretty trivial to switch between the three cache types.

type Cache interface {
    Add(key, value interface{})
    Contains(key interface{}) bool
    Get(key interface{}) (interface{}, bool)
    Keys() []interface{}
    Len() int
    Peek(key interface{}) (interface{}, bool)
    Purge()
    Remove(key interface{})
}

New2Q with size = 1 fails to create the cache

This seems to be due to the frequent and recent LRU sizes being calculated and truncated to 0, so when those sub-caches get created the following error is returned:

"Must provide a positive size"

Obviously this is a real edge case, as who would want a cache of size 1 (except I did, when implementing some unit tests for Consul).

atomic contains-or-add?

Hey,
would a new function that does all of this in one shot be accepted in a PR?

  • check if the value exists in the cache; if so, return true
  • if not, add the specified value

I was thinking a signature like:

func (c *Cache) ContainsOrAdd(key, value interface{}) (ok, evicted bool)

onEvict callback for ARC

Hi, I would like to say thanks for this project guys.

Are there any plans to add an onEvict callback to the ARC cache too? It's pretty useful when working with sync.Pool as well.

GitHub Actions - deprecated warnings found - action required!

Workflow Name: build
Branch: master
Run URL: https://github.com/hashicorp/golang-lru/actions/runs/4744177830

save-state deprecation warnings: 0
set-output deprecation warnings: 0
node12 deprecation warnings: 1

Please review these deprecation warnings as soon as possible and merge in the necessary updates.

GitHub will be removing support for these commands and plan to fully disable them on 31st May 2023. At this time, any workflow that still utilizes these commands will fail. See https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/.

GitHub have not finalized a date for deprecating node12 yet but have indicated that this will be summer 2023. So it is advised to switch to node16 asap. See https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/.

If you need any help, please reach out to us in #team-rel-eng.

Why not sync.Map?

Go has provided sync.Map since Go 1.9, and its performance can be better than a simple map + sync.RWMutex. So why not use it in this library?

project is dead?

Hi all -- I'm surprised to see this project drifting into oblivion when so many projects depend on it. As an amazing OSS infrastructure pioneer, I'd expect HashiCorp to do a better job maintaining this small and simple lib.

ie.
#97
#98
#105
...

cc: @armon

Update README to consider generics

I don't believe the current example in the README compiles with v2 of the library after the move to generics. It would be great to update this (I am currently trying to update some v1 code).

onEvicted is called with incorrect item when a previous Add overwrites an existing item

When the cache is full and an existing item is updated, and a new item is then added on top of that, the onEvicted callback is called with an incorrect key and value.

See below example:

package main

import (
	"fmt"

	lru "github.com/hashicorp/golang-lru/v2"
)

var cache *lru.Cache[int, struct{}]

func main() {
	var evictedKeys []int

	cache, _ = lru.NewWithEvict(
		2,
		func(key int, _ struct{}) {
			evictedKeys = append(evictedKeys, key)
		})

	cache.Add(1, struct{}{})
	fmt.Printf("cache keys: %v\n", cache.Keys()) // cache keys: [1]

	cache.Add(2, struct{}{})
	fmt.Printf("cache keys: %v\n", cache.Keys()) // cache keys: [1 2]

	cache.Add(1, struct{}{})
	fmt.Printf("cache keys: %v\n", cache.Keys()) // cache keys: [2, 1]

	cache.Add(3, struct{}{})
	fmt.Printf("cache keys: %v\n", cache.Keys()) // cache keys: [1 3]

	// expecting [2], or even [1 2] would be OK as 1 is overwritten
	fmt.Printf("evicted keys: %v\n", evictedKeys) // evicted keys: [1]
}

The problem is caused in these lines:

https://github.com/hashicorp/golang-lru/blob/master/lru.go#L84
https://github.com/hashicorp/golang-lru/blob/master/lru.go#L133
https://github.com/hashicorp/golang-lru/blob/master/lru.go#L157

These lines only take the first queued evicted key, and the eviction caused by the overwritten key appears first. The overwritten key is added to the evictedKeys and evictedValues lists, but the evicted flag is false when it is overwritten, so onEvicted is not called for it at that point; later, when a real eviction happens, the stale entry is reported instead.

Either onEvicted must be called in a loop to consume all queued keys and values, or the evicted flag must be true when an existing key is overwritten.

panic during adding new element

error message:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x40 pc=0x808019]

goroutine 10662137 [running]:
container/list.(*List).remove(...)
/usr/lib/golang/src/container/list/list.go:110
container/list.(*List).Remove(...)
/usr/lib/golang/src/container/list/list.go:143
github.com/hashicorp/golang-lru/simplelru.(*LRU).removeElement(0xc0005962a0, 0xc002408000)
/root/zip/majiang/vendor/github.com/hashicorp/golang-lru/simplelru/lru.go:155 +0x39
github.com/hashicorp/golang-lru/simplelru.(*LRU).removeOldest(0xc0005962a0)
/root/zip/majiang/vendor/github.com/hashicorp/golang-lru/simplelru/lru.go:149 +0x4c
github.com/hashicorp/golang-lru/simplelru.(*LRU).Add(0xc0005962a0, 0x11ab0a0, 0xc00b4ac2c0, 0x0, 0x0, 0xc00c49b501)
/root/zip/majiang/vendor/github.com/hashicorp/golang-lru/simplelru/lru.go:67 +0x2c1
github.com/hashicorp/golang-lru.(*ARCCache).replace(0xc0002ec1c0, 0x0)
/root/zip/majiang/vendor/github.com/hashicorp/golang-lru/arc.go:186 +0x96
github.com/hashicorp/golang-lru.(*ARCCache).Add(0xc0002ec1c0, 0x11ab0a0, 0xc00c49b5e0, 0x1380640, 0xc007c432c0)
/root/zip/majiang/vendor/github.com/hashicorp/golang-lru/arc.go:163 +0x5b2

How to use v2 without generics?

I was updating my Go dependencies today and, after fighting go mod and initially not understanding what github.com/hashicorp/golang-lru: module github.com/hashicorp/golang-lru@latest found (v1.0.1), but does not contain package github.com/hashicorp/golang-lru meant, I finally realized that the v1.0.1 tag is unusable and broken. That is not ideal for a tag which at first looks like the stable release before v2. I then tried to upgrade to v2, only to realize that there is no longer a Cache type that does not use generics. I am not using generics in my code, and I was a bit surprised that I would now need them just to use an LRU cache. I am also not yet fluent in generics, and there is no migration path I could find in the examples, the docs, or in PR #111, which removed the non-generic Cache type.

Breaking change

It seems the last commits have broken github.com/hashicorp/go-immutable-radix (and hence hashicorp/raft and everything depending on it):

go get -u github.com/hashicorp/go-immutable-radix
# github.com/hashicorp/go-immutable-radix
/home/free/go/src/github.com/hashicorp/go-immutable-radix/iradix.go:139:14: cannot use lru (type simplelru.LRUCache) as type *simplelru.LRU in assignment: need type assertion

move testing.go to a test only package

This file imports testing, which is an import disallowed by tooling for production packages. It is only called by test files, so we don't need it in a non-test package.

storing same pointer as value for multiple different keys

I have key-value pairs where the key is a string and the value is a pointer to a struct. Suppose there is a single struct object: can I safely store, say, three different keys whose values all point to that same struct?

go map has a memory leak, can you help fix it?

Go maps have a memory leak; can you help fix it? I now use two Go maps to work around it; without this approach I could not avoid the memory leak problem.

The reason is that deleting a key from a Go map only marks it as deleted but does not actually free the memory, so memory usage can keep growing. The two-map approach periodically sets the old Go map to nil, which solves the problem.

package simplelru

type BetterMap struct {
	firstMap        map[interface{}]interface{}
	secondMap       map[interface{}]interface{}
	deleteNumThresh int32
	totalDeleteNum  int32
	useMapIndex     int32
}

func NewBetterMap(deleteNumThresh int32) *BetterMap {
	return &BetterMap{
		firstMap:        make(map[interface{}]interface{}, 0),
		secondMap:       make(map[interface{}]interface{}, 0),
		deleteNumThresh: deleteNumThresh,
		totalDeleteNum:  0,
		useMapIndex:     1,
	}
}

func (bmap *BetterMap) Set(key interface{}, value interface{}) {
	if bmap.useMapIndex == 1 {
		bmap.firstMap[key] = value
	} else {
		bmap.secondMap[key] = value
	}
}

func (bmap *BetterMap) GetValue(key interface{}) (interface{}, bool) {
	if value, ok := bmap.firstMap[key]; ok {
		return value, ok
	}
	value, ok := bmap.secondMap[key]
	return value, ok
}

func (bmap *BetterMap) GetValues() []interface{} {
	values := make([]interface{}, 0)
	if bmap.useMapIndex == 1 {
		for _, value := range bmap.firstMap {
			values = append(values, value)
		}
	} else {
		for _, value := range bmap.secondMap {
			values = append(values, value)
		}
	}
	return values
}

func (bmap *BetterMap) GetMap() map[interface{}]interface{} {
	if bmap.useMapIndex == 1 {
		return bmap.firstMap
	}
	return bmap.secondMap
}

func (bmap *BetterMap) GetLen() int {
	if bmap.useMapIndex == 1 {
		return len(bmap.firstMap)
	}
	return len(bmap.secondMap)
}

func (bmap *BetterMap) Purge() {
	bmap.firstMap = nil
	bmap.secondMap = nil
}

func (bmap *BetterMap) DelKey(key interface{}) {
	if bmap.useMapIndex == 1 {
		delete(bmap.firstMap, key)
		bmap.totalDeleteNum++
		if bmap.totalDeleteNum >= bmap.deleteNumThresh {
			bmap.secondMap = make(map[interface{}]interface{}, len(bmap.firstMap))
			for i, v := range bmap.firstMap {
				bmap.secondMap[i] = v
			}
			bmap.useMapIndex = 2
			bmap.totalDeleteNum = 0
			bmap.firstMap = nil
		}
	} else {
		delete(bmap.secondMap, key)
		bmap.totalDeleteNum++
		if bmap.totalDeleteNum >= bmap.deleteNumThresh {
			bmap.firstMap = make(map[interface{}]interface{}, len(bmap.secondMap))
			for i, v := range bmap.secondMap {
				bmap.firstMap[i] = v
			}
			bmap.useMapIndex = 1
			bmap.totalDeleteNum = 0
			bmap.secondMap = nil
		}
	}
}

package simplelru

import (
	"container/list"
	"errors"
)

// EvictCallback is used to get a callback when a cache entry is evicted
type EvictCallback func(key interface{}, value interface{})

// LRU implements a non-thread safe fixed size LRU cache
type LRU struct {
	size      int
	evictList *list.List
	//items map[interface{}]*list.Element
	betterMap *BetterMap
	onEvict   EvictCallback
}

// entry is used to hold a value in the evictList
type entry struct {
	key   interface{}
	value interface{}
}

// NewLRU constructs an LRU of the given size
func NewLRU(size int, onEvict EvictCallback) (*LRU, error) {
	if size <= 0 {
		return nil, errors.New("Must provide a positive size")
	}
	c := &LRU{
		size:      size,
		evictList: list.New(),
		betterMap: NewBetterMap(int32(size)),
		onEvict:   onEvict,
	}
	return c, nil
}

// Purge is used to completely clear the cache.
func (c *LRU) Purge() {
	c.betterMap.Purge()
	c.betterMap = NewBetterMap(int32(c.size))
	c.evictList.Init()
}

// Add adds a value to the cache. Returns true if an eviction occurred.
func (c *LRU) Add(key, value interface{}) (evicted bool) {
	// Check for existing item
	if ent, ok := c.betterMap.GetValue(key); ok {
		c.evictList.MoveToFront(ent.(*list.Element))
		ent.(*list.Element).Value.(*entry).value = value
		return false
	}
	// Add new item
	ent := &entry{key, value}
	entry := c.evictList.PushFront(ent)
	//c.items[key] = entry
	c.betterMap.Set(key, entry)
	evict := c.evictList.Len() > c.size
	// Verify size not exceeded
	if evict {
		c.removeOldest()
	}
	return evict
}

// Get looks up a key's value from the cache.
func (c *LRU) Get(key interface{}) (value interface{}, ok bool) {
	if ent, ok := c.betterMap.GetValue(key); ok {
		c.evictList.MoveToFront(ent.(*list.Element))
		return ent.(*list.Element).Value.(*entry).value, true
	}
	return
}

// Contains checks if a key is in the cache, without updating the recent-ness
// or deleting it for being stale.
func (c *LRU) Contains(key interface{}) (ok bool) {
	_, ok = c.betterMap.GetValue(key)
	return ok
}

// Peek returns the key value (or undefined if not found) without updating
// the "recently used"-ness of the key.
func (c *LRU) Peek(key interface{}) (value interface{}, ok bool) {
	if ent, ok := c.betterMap.GetValue(key); ok {
		return ent.(*list.Element).Value.(*entry).value, true
	}
	return nil, ok
}

// Remove removes the provided key from the cache, returning if the
// key was contained.
func (c *LRU) Remove(key interface{}) (present bool) {
	if ent, ok := c.betterMap.GetValue(key); ok {
		c.removeElement(ent.(*list.Element))
		return true
	}
	return false
}

// RemoveOldest removes the oldest item from the cache.
func (c *LRU) RemoveOldest() (key, value interface{}, ok bool) {
	ent := c.evictList.Back()
	if ent != nil {
		c.removeElement(ent)
		kv := ent.Value.(*entry)
		return kv.key, kv.value, true
	}
	return nil, nil, false
}

// GetOldest returns the oldest entry
func (c *LRU) GetOldest() (key, value interface{}, ok bool) {
	ent := c.evictList.Back()
	if ent != nil {
		kv := ent.Value.(*entry)
		return kv.key, kv.value, true
	}
	return nil, nil, false
}

// Keys returns a slice of the keys in the cache, from oldest to newest.
func (c *LRU) Keys() []interface{} {
	keys := make([]interface{}, c.betterMap.GetLen())
	i := 0
	for ent := c.evictList.Back(); ent != nil; ent = ent.Prev() {
		keys[i] = ent.Value.(*entry).key
		i++
	}
	return keys
}

// Len returns the number of items in the cache.
func (c *LRU) Len() int {
	return c.evictList.Len()
}

// Resize changes the cache size.
func (c *LRU) Resize(size int) (evicted int) {
	diff := c.Len() - size
	if diff < 0 {
		diff = 0
	}
	for i := 0; i < diff; i++ {
		c.removeOldest()
	}
	c.size = size
	return diff
}

// removeOldest removes the oldest item from the cache.
func (c *LRU) removeOldest() {
	ent := c.evictList.Back()
	if ent != nil {
		c.removeElement(ent)
	}
}

// removeElement is used to remove a given list element from the cache
func (c *LRU) removeElement(e *list.Element) {
	c.evictList.Remove(e)
	kv := e.Value.(*entry)
	c.betterMap.DelKey(kv.key)
	if c.onEvict != nil {
		c.onEvict(kv.key, kv.value)
	}
}

Why does Contains() method documentation talks about eviction

Going through the documentation, I am a bit confused about the wording there:

Contains checks if a key is in the cache, without updating the recent-ness or deleting it for being stale.

AFAIK, eviction in an LRU happens only on addition.

If that is correct, why do we even mention eviction when checking for presence with the Contains() method?

Is there any LRU that marks elements as stale without deleting them, and evicts them while checking for presence?

Tag latest master

Please tag the latest changes on the master branch.
The current latest tag is not thread safe.

Please tag releases

Please consider assigning version numbers and tagging releases. Tags/releases
are useful for downstream package maintainers (in Debian and other distributions) to export source tarballs, automatically track new releases and to declare dependencies between packages. Read more in the Debian Upstream Guide.

Versioning provides additional benefits to encourage vendoring of a particular (e.g. latest stable) release contrary to random unreleased snapshots.


Suggestion to handle thread safe for LRU Cache

This LRU cache (simplelru) is not thread safe, so it would be better to add a thread-safety layer on top of it.

Code to reproduce the concurrent access.

Code :-

// github.com/hashicorp/golang-lru/simplelru
package main

import (
	"log"
	"sync"

	"github.com/hashicorp/golang-lru/simplelru"
)

var wg sync.WaitGroup

func main() {
	LRU, err := simplelru.NewLRU(100, nil)
	if err != nil {
		log.Fatal(err)
		return
	}
	for i := 0; i < 100; i++ {
		LRU.Add(100, 100)
	}
	wg.Add(2)
	go FreqAccess(LRU)
	go FreqWrite(LRU)
	wg.Wait()
}

func FreqAccess(lru *simplelru.LRU) {
	defer wg.Done()
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			lru.Get(i)
		}()
	}
}

func FreqWrite(lru *simplelru.LRU) {
	defer wg.Done()
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			lru.Add(i, i)
		}()
	}
}

Error :-

fatal error: concurrent map read and map write

Size of items map

Hi,

I have a minor comment about the size of the items map. If the cache is full and we add a new unique entry, it is saved to the map before the delete operation runs.

This doesn't affect the cache's correctness, but it would probably be better to omit the size hint during allocation, or to set it to size+1.

ARC and 2Q caches are not compatible with LRUCache interface

Currently building next code:

package main

import (
	"github.com/hashicorp/golang-lru"
	"github.com/hashicorp/golang-lru/simplelru"
)

func main() {
	var cache simplelru.LRUCache
	cache, _ = lru.NewARC(10)
}

Fails because *lru.ARCCache does not implement parts of simplelru.LRUCache:

  • wrong type for Add method [missing bool return]
  • missing GetOldest method
  • wrong type for Remove method [missing bool return]
  • missing RemoveOldest method
  • missing Resize method

Would it be reasonable to fix these mismatches so the caches comply with the declared interface?

LRU introduces "considerable" overhead

I am in the process of introducing a cache layer to one of my libraries. The goal is to cache compiled regular expressions; at a high level:

r := regexp.MustCompile(someregex)

if r.MatchString(somehaystack) {
	// ..
}

Because the regexps are user-defined, I want to cache them so I don't have to compile them on every request. First, I started out with a very simple approach of storing them in a map (no locks, nothing, a very naive approach):

var cache = map[string]*regexp.Regexp{}

// ...
cache[pattern], _ = regexp.Compile(pattern)

// ..
if r, ok := cache[p]; ok {
	r.MatchString(something)
}

This reduced runtime considerably:

# without cache
BenchmarkLadon/policies=10-8                2000            617938 ns/op
BenchmarkLadon/policies=100-8                300           5677979 ns/op
BenchmarkLadon/policies=1000-8                20          59775053 ns/op

# with primitive cache
BenchmarkLadon/policies=10-8              200000              6022 ns/op
BenchmarkLadon/policies=100-8              30000             43606 ns/op
BenchmarkLadon/policies=1000-8              3000            511438 ns/op

Now, I thought it might be a good idea to use golang-LRU for caching, so memory won't fill up. Unfortunately, this didn't go as expected:

# without cache
BenchmarkLadon/policies=10-8                2000            617938 ns/op
BenchmarkLadon/policies=100-8                300           5677979 ns/op
BenchmarkLadon/policies=1000-8                20          59775053 ns/op

# with cache (map)
BenchmarkLadon/policies=10-8              200000              6022 ns/op
BenchmarkLadon/policies=100-8              30000             43606 ns/op
BenchmarkLadon/policies=1000-8              3000            511438 ns/op

# with cache (replaced map with LRU)
BenchmarkLadon/policies=10-8                2000            637065 ns/op
BenchmarkLadon/policies=100-8                200           6383768 ns/op
BenchmarkLadon/policies=1000-8                20          68409968 ns/op

This really baffled me, because we're using even more time now. First I investigated whether I had made a mistake in my code, without luck. Then I started writing some benchmarks to check LRU itself:

package lru

import (
	"github.com/hashicorp/golang-lru"
	"regexp"
	"sync"
	"testing"
)

func BenchmarkLadonLRU(b *testing.B) {
	b.StopTimer()
	p, _ := regexp.Compile("^(foo|bar)$")
	c, _ := lru.New(4096)
	b.StartTimer()

	for z := 0; z < b.N; z++ {
		c.Add(z, p)
		o, _ := c.Get(z)
		if r, ok := o.(*regexp.Regexp); ok {
			r.MatchString("foo")
		}
	}
}

func BenchmarkLadonSimple(b *testing.B) {
	var lock sync.RWMutex
	p, _ := regexp.Compile("^(foo|bar)$")
	cache := map[int]interface{}{}
	b.ResetTimer()

	for z := 0; z < b.N; z++ {
		lock.Lock()
		cache[z] = p
		lock.Unlock()
		lock.RLock()
		o := cache[z]
		lock.RUnlock()
		if r, ok := o.(*regexp.Regexp); ok {
			r.MatchString("foo")
		}
	}
}

func BenchmarkLadonNoTypeCast(b *testing.B) {
	var lock sync.RWMutex
	p, _ := regexp.Compile("^(foo|bar)$")
	cache := map[int]*regexp.Regexp{}
	b.ResetTimer()

	for z := 0; z < b.N; z++ {
		lock.Lock()
		cache[z] = p
		lock.Unlock()
		lock.RLock()
		r := cache[z]
		lock.RUnlock()
		r.MatchString("foo")
	}
}

func BenchmarkLadonNoTypeCastNoLockJustYolo(b *testing.B) {
	p, _ := regexp.Compile("^(foo|bar)$")
	cache := map[int]*regexp.Regexp{}
	b.ResetTimer()

	for z := 0; z < b.N; z++ {
		cache[z] = p
		r := cache[z]
		r.MatchString("foo")
	}
}

Which resulted:

BenchmarkLadonLRU-8                              2000000               890 ns/op
BenchmarkLadonSimple-8                           3000000               531 ns/op
BenchmarkLadonNoTypeCast-8                       3000000               497 ns/op
BenchmarkLadonNoTypeCastNoLockJustYolo-8         3000000               446 ns/op

While locking and type casting introduce some overhead, LRU needs about twice the time per op compared to the most primitive cache. I noticed that increasing the cache's size also increases the time spent per operation.

I still believe that something else is going on in my code where I replaced the map with LRU, although I think it might be due to some LRU or Go internal behavior when dealing with the interface value (maybe some private fields are eliminated and require re-processing when calling MatchString?).

I don't know if there is an easy way to improve the runtime of LRU, but maybe you find this helpful. I'll update you if I find something new.

Fail to Add string key of value `^^^[ddd]!!\1\1\1\1`

I added the following to simplelru/lru_test:

	k := `^^^[ddd]!!\1\1\1\1`
	a := l.Add(k, k)
	if a {
		t.Fatalf("should have no eviction : %v eviciton :%v", k, a)
	}
	p := l.Remove(k)
	if p {
		t.Fatalf("should be present : %v", p)
	}


Test run result:
  lru_test.go:76: should be present : true

I would expect `^^^[ddd]!!\1\1\1\1` to be able to be added and removed. I would also expect Remove to say the item was present before removal?

Add a copyright / notice file

This project doesn't appear to include any copyright information or a NOTICE file. Can you add one? This is desired to comply with the open source license conditions.

would you consider a pr for lock

// Get looks up a key's value from the cache.
func (c *Cache) Get(key interface{}) (value interface{}, ok bool) {
	c.lock.Lock()
	value, ok = c.lru.Get(key)
	c.lock.Unlock()
	return value, ok
}

When I get data from the LRU, I need to lock the entire LRU. Would you consider locking only a few related nodes?

Would you consider a PR for cache resize?

I'm using golang-lru in an HTTP microcache.

I'd like to be able to specify my cache's size in bytes rather than length.
I have a pretty good idea of how I want to do it, but it will require resizing the cache length dynamically.
Would you consider a PR adding the following methods?

// Resize changes the cache size
func (c *Cache) Resize(size int) (evicted int) {
	c.lock.Lock()
	evicted = c.lru.Resize(size)
	c.lock.Unlock()
	return evicted
}
// Resize changes the cache size
func (c *LRU) Resize(size int) (evicted int) {
	diff := c.Len() - size
	if diff < 0 {
		diff = 0
	}
	for i := 0; i < diff; i++ {
		c.removeOldest()
	}
	c.size = size
	return diff
}

removeElement panics with nil interface conversion

Hello, I'm facing intermittent issues with using simplelru under high load. Here is the error:

Apr 22 05:56:59 server1 erigon[4187823]: panic: interface conversion: interface {} is nil, not *simplelru.entry
Apr 22 05:56:59 server1 erigon[4187823]: goroutine 355 [running]:
Apr 22 05:56:59 server1 erigon[4187823]: github.com/hashicorp/golang-lru/simplelru.(*LRU).removeElement(0x16ec740, 0xc0009cb608)
Apr 22 05:56:59 server1 erigon[4187823]: github.com/hashicorp/[email protected]/simplelru/lru.go:172 +0x107
Apr 22 05:56:59 server1 erigon[4187823]: github.com/hashicorp/golang-lru/simplelru.(*LRU).removeOldest(0x15d7080)
Apr 22 05:56:59 server1 erigon[4187823]: github.com/hashicorp/[email protected]/simplelru/lru.go:165 +0x34
Apr 22 05:56:59 server1 erigon[4187823]: github.com/hashicorp/golang-lru/simplelru.(*LRU).Add(0xc0036d5b60, {0x1567300, 0xc01e20ee10}, {0x15ad680, 0x371f460})
Apr 22 05:56:59 server1 erigon[4187823]: github.com/hashicorp/[email protected]/simplelru/lru.go:67 +0x394
Apr 22 05:56:59 server1 erigon[4187823]: github.com/ledgerwatch/erigon-lib/txpool.(*TxPool).discardLocked(0xc000200180, 0xc012f77ef0, 0x4)
Apr 22 05:56:59 server1 erigon[4187823]: github.com/ledgerwatch/[email protected]/txpool/pool.go:1063 +0x15a
Apr 22 05:56:59 server1 erigon[4187823]: github.com/ledgerwatch/erigon-lib/txpool.(*TxPool).addLocked(0xc000200180, 0xc01c355a90)
Apr 22 05:56:59 server1 erigon[4187823]: github.com/ledgerwatch/[email protected]/txpool/pool.go:1038 +0x2c5
Apr 22 05:56:59 server1 erigon[4187823]: github.com/ledgerwatch/erigon-lib/txpool.addTxs(0x105cc63, {0x28a5fe8, 0xc01e015938}, 0xc001c65a70, {{0xc010ff5800, 0x42, 0x80}, {0xc0153eec00, 0x528, 0x600}, ...}, ...)
Apr 22 05:56:59 server1 erigon[4187823]: github.com/ledgerwatch/[email protected]/txpool/pool.go:905 +0x4a4
Apr 22 05:56:59 server1 erigon[4187823]: github.com/ledgerwatch/erigon-lib/txpool.(*TxPool).processRemoteTxs(0xc000200180, {0x28c2d58, 0xc0010eb480})
Apr 22 05:56:59 server1 erigon[4187823]: github.com/ledgerwatch/[email protected]/txpool/pool.go:507 +0x4bc
Apr 22 05:56:59 server1 erigon[4187823]: github.com/ledgerwatch/erigon-lib/txpool.MainLoop({0x28c2d58, 0xc0010eb480}, {0x28cc1a8, 0xc001d92280}, {0x1ad4ccb667b585ab, 0x13748a941b4512d3}, 0xc000200180, 0xc0006ae780, 0xc0003571c0, 0xc0005ba150, ...)
Apr 22 05:56:59 server1 erigon[4187823]: github.com/ledgerwatch/[email protected]/txpool/pool.go:1327 +0x565
Apr 22 05:56:59 server1 erigon[4187823]: created by github.com/ledgerwatch/erigon/eth.New
Apr 22 05:56:59 server1 erigon[4187823]: github.com/ledgerwatch/erigon/eth/backend.go:475 +0x3368

The panic happens at this line, I believe:
kv := e.Value.(*entry)

Is it because I'm misusing the library in some way?

Add: retrieve evicted element ?

Hi,

I might be missing the point, but I use the LRU to store values that are tied to the runtime, so when they are evicted I need to untie them.

It looks like I have to do a bunch of things on my side to handle my case, things that already exist inside the lib.

For example, when I call Add, I want to know the evicted item, but I can't get it without manually doing a Peek etc.

Would it make sense to change the Add signature to
func (c *Cache) Add(key, value interface{}, func(store NoTsLRU, evicted bool, value interface{})) bool
?

Unclear terms of license for HashiCorp Projects

IANAL, but according to an MPL v2 FAQ maintained by Mozilla, it looks like this project (and other HashiCorp projects) is missing key information as to whether the source code is licensed incompatibly (Ex. B) or compatibly (Ex. A) with secondary licenses (such as GPL v3).

Q4: I want to use the Mozilla Public License for software that I have written.
What do I have to do?

To apply the Mozilla Public License to software that you have written, add the
header from Exhibit A of the license to each source code file in your project.
Sample headers for various commenting styles are available here. You may
also add additional accurate notices of copyright ownership, such as the
name of the copyright holder, but this is not necessary.

and

Q22: Does MPL 2.0 require that the MPL 2.0 license notice header be included
in every file?

The license notice must be in some way "attached" to each file. (Sec. 1.4.)
In cases where putting it in the file is impossible or impractical, that requirement
can be fulfilled by putting the notice somewhere that a recipient "would be likely
to look for such a notice," such as a LICENSE file in the same directory as the file.
(Ex. A.) The license notice is as short as possible (3 lines) to make it easy to put
in as many different types of files as possible.

While the license permits putting the header somewhere other than the file itself,
individual files often end up being distributed on their own, without the rest of the
software they were authored with. As a result, putting the license notice in the file
is the surest way to ensure that recipients are always notified.

For your convenience, Mozilla has produced boilerplate headers formatted with
several common 'comment' syntaxes, suitable for copying and pasting.

Related? #72

Unused code in simplelru.LRU.Get()

Hi there!

I was looking closely at the code and found a piece added in #56 which doesn't seem to be used and is not tested:

if ent.Value.(*entry) == nil {
	return nil, false
}

I tried but was unable to trigger this condition. I think it should either be covered with tests and added to all other affected functions (e.g. at least LRU.Peek), or removed altogether if that condition is indeed impossible, as I think it is.

question about promotion policy in 2Q-LRU

Hi, it seems that the implementation of 2Q-LRU in golang-lru differs slightly from the original paper in its promotion policy.

In the paper, accessing an entry in the recent list does nothing, since that access may simply be a correlated reference (from the "2Q, Full Version" part of the paper).
[figure: 2q-lru-access]

But in golang-lru, when an entry in the recent list is accessed again, it is promoted to the MRU end of the frequent list.

func (c *TwoQueueCache) Get(key interface{}) (value interface{}, ok bool) {
	// ...

	// If the value is contained in recent, then we
	// promote it to frequent
	if val, ok := c.recent.Peek(key); ok {
		c.recent.Remove(key)
		c.frequent.Add(key, val)
		return val, ok
	}

Could you share your reasoning behind this promotion policy? @armon

I think the answer is going to be "when an entry is accessed again, it is truly in most-recently-used status", but I figured I would ask just in case I'm wrong.

Thanks in advance.

Does not build on Go 1.11

Hi,

Your go.mod file prevents golang-lru from building on Go 1.11. Is 1.12 a strict requirement of the library? If not, can the go.mod file be modified to allow older Go versions, at least back to 1.11?

Thanks!
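For reference, the compatibility floor is set by the `go` directive in go.mod; a hypothetical relaxed version (module path as published, directive lowered) would look like:

```
module github.com/hashicorp/golang-lru

go 1.11
```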

can you tag v0.5.5?

Hi there, tag v0.5.4 is quite outdated; can you please tag v0.5.5 or v0.6.0?

have Add return a bool if an eviction occurred

I googled the "golang lrucache" and found my way here. Thanks for making this public!

I'd like to instrument my LRU cache and see how many inserts / removes / evictions are happening, so I can tell whether my cache is correctly sized. I can wrap / facade the object to count inserts and removes (or just track them explicitly in the caller).

But I can't tell when an eviction happens. I saw a pull request flying around for an onEviction event. I could make that work, but it's clumsy, since I'd need a closure or something so the callback knows about the parent object and increments the right counter.

My proposal is to just have Add return a bool indicating whether an eviction happened. This way I can tell directly, with no callback needed, on every operation on the cache.

I'm happy to write the patch, but wanted to check with the group here to see if anyone has better ideas (and is there a better place to have this discussion?)

thanks,

nickg

MPL License "Incompatible With Secondary Licenses"

Hi there!

I would like to know whether the inclusion of Exhibit B of the MPL 2.0 in the license for this library was intentional, or merely a mistake.

The fact that this license is "Incompatible With Secondary Licenses" essentially means that the software may not be used to create a "Larger Work" licensed under the (L)GPL.

The (L)GPL are still some of the most popular OSS licenses, so I'm quite surprised to have found this notice in this library's license.

Mozilla's FAQ mentions this incompatibility between the licenses when Exhibit B has been included: https://www.mozilla.org/en-US/MPL/2.0/FAQ/#mpl-and-lgpl
