
ConcurrentLinkedHashMap
========================

ConcurrentLinkedHashMap is a concurrent, bounded map for use as an in-memory
cache. It is the default caching library in Apache Cassandra and is the
reference algorithm for Google Guava's CacheBuilder (formerly MapMaker).

Prior to Building
--------------------
The following library must be installed into the local Maven repository.

$ mvn install:install-file \
    -Dfile=lib/cache-benchmark/benchmark-fwk.jar \
    -DpomFile=lib/cache-benchmark/pom.xml

Compiling
--------------------
$ mvn clean compile

Analysis
--------------------
Static analysis and test coverage reports can be viewed by generating the site.
$ mvn site

Testing
--------------------
The unit test and benchmark parameter options can be configured in
src/test/resources/testng.yaml

The default test suite is used for development.
$ mvn test (or mvn -P development test)

The load test suite is used to detect problems that only appear after a long
execution, such as memory leaks.
$ mvn -P load test

A test or test method can be executed selectively, such as
$ mvn -Dtest=ConcurrentMapTest#get_whenNotFound -Dcapacity=100 test

Benchmarks
--------------------
At this time, this library does not supply a canonical benchmark. It leverages
existing benchmarking tools available externally.

This benchmarks the single-threaded performance.
$ mvn -P caliper test

This benchmarks the multi-threaded performance with an expected bound.
$ mvn -P cachebench test

This benchmarks the multi-threaded performance without an expected bound.
$ mvn -P perfHash test

This benchmarks the eviction algorithm efficiencies based on a scrambled zipfian
working set.
$ mvn -P efficiency test


Issues
--------------------

Ability to cache collections and evict when sum(lengths) > capacity.

Requested by Matt Passell ([email protected]) over private email.
------------

Current Behavior:
Each element in the map consumes one slot of capacity.

Desired Behavior:
Ability to insert collections, where the length determines how many slots 
it consumes.

Complexity / Amount of Change Required:
Allow pluggable evaluators to determine the size of an entry. One could
imagine using memory usage as the capacity bound and using Java's agent
instrumentation to evaluate the entry's memory size.
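
This request eventually shipped as the pluggable Weigher API. A minimal sketch
of the collection case (the capacity and types are illustrative; Math.max keeps
empty collections from weighing zero):

    import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;
    import com.googlecode.concurrentlinkedhashmap.Weigher;
    import java.util.Collection;

    // Each cached collection consumes as many slots as it has elements, so
    // the bound applies to sum(lengths) rather than to the entry count.
    ConcurrentLinkedHashMap<String, Collection<Integer>> cache =
        new ConcurrentLinkedHashMap.Builder<String, Collection<Integer>>()
            .maximumWeightedCapacity(1000)
            .weigher(new Weigher<Collection<Integer>>() {
                @Override
                public int weightOf(Collection<Integer> value) {
                    return Math.max(1, value.size());
                }
            })
            .build();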

Original issue reported on code.google.com by [email protected] on 28 Feb 2009 at 4:14

ConcurrentLinkedMultimap?

Have you considered creating a version of this data structure which allows the 
association of multiple entries per key, with separate expiry (FIFO, LRU) per 
entry? 

I have a very strong use case for such a structure, but I don't see anything
out there that would satisfy it (map to set/list is not the same, as then the
whole entry shares the same expiry.)

Original issue reported on code.google.com by ryan.daum on 9 Dec 2009 at 6:35

FindBugs reports serialization issues

What steps will reproduce the problem?
1. ant run-findbugs
2. open the HTML file


May need to re-think whether it is useful to make the map serializable.

Se    Class ConcurrentLinkedHashMap defines non-transient non-serializable instance field sentinel
Se    Class ConcurrentLinkedHashMap defines non-transient non-serializable instance field data
Se    Class ConcurrentLinkedHashMap defines non-transient non-serializable instance field listenerQueue
Se    Class ConcurrentLinkedHashMap defines non-transient non-serializable instance field reorderQueue
Se    Class ConcurrentLinkedHashMap defines non-transient non-serializable instance field weigher
Se    Class ConcurrentLinkedHashMap defines non-transient non-serializable instance field writeQueue
Se    Class ConcurrentLinkedHashMap$SerializationProxy defines non-transient non-serializable instance field data
Se    Class ConcurrentLinkedHashMap$SerializationProxy defines non-transient non-serializable instance field weigher
Se    ConcurrentLinkedHashMap$WriteThroughEntry is serializable and an inner class
SnVI  ConcurrentLinkedHashMap$WriteThroughEntry is Serializable; consider declaring a serialVersionUID

Se: This Serializable class defines a non-primitive instance field which is
neither transient, Serializable, or java.lang.Object, and does not appear
to implement the Externalizable interface or the readObject() and
writeObject() methods.  Objects of this class will not be deserialized
correctly if a non-Serializable object is stored in this field.

Se inner class: This Serializable class is an inner class. Any attempt to
serialize it will also serialize the associated outer instance. The outer
instance is serializable, so this won't fail, but it might serialize a lot
more data than intended. If possible, making the inner class a static inner
class (also known as a nested class) should solve the problem.

Original issue reported on code.google.com by [email protected] on 11 Apr 2010 at 10:41


Remove catch-up executor (failed experiment)

The catch-up executor is actually never used in the code except in one place:
shouldDrainBuffers() checks whether the executor is shut down (which is true
for the default executor). If it is not, no eviction is ever carried out.

Original issue reported on code.google.com by [email protected] on 27 Feb 2012 at 11:52

Race condition with ConcurrentLinkedHashMap

We are seeing a race condition in ConcurrentLinkedHashMap's appendToTail. We
are using this library in Cassandra and came across this problem on trunk.

The open ticket in question is at (full stack trace):

https://issues.apache.org/jira/browse/CASSANDRA-405

Stack trace:

pool-1-thread-{63,62,61,59,58,54,53,51,49,47} and ROW-READ-STAGE{8,7,5,4,3,2,1}:

"ROW-READ-STAGE:8" prio=10 tid=0x00007f1b78b52000 nid=0x1945 runnable [0x0000000046532000]
   java.lang.Thread.State: RUNNABLE
    at com.reardencommerce.kernel.collections.shared.evictable.ConcurrentLinkedHashMap$Node.appendToTail(ConcurrentLinkedHashMap.java:536)
    at com.reardencommerce.kernel.collections.shared.evictable.ConcurrentLinkedHashMap.putIfAbsent(ConcurrentLinkedHashMap.java:281)
    at com.reardencommerce.kernel.collections.shared.evictable.ConcurrentLinkedHashMap.put(ConcurrentLinkedHashMap.java:256)
    at org.apache.cassandra.io.SSTableReader.getPosition(SSTableReader.java:241)
    at org.apache.cassandra.db.filter.SSTableNamesIterator.<init>(SSTableNamesIterator.java:46)
    at org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:69)
    at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1445)
    at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1379)
    at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1398)
    at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1379)
    at org.apache.cassandra.db.Table.getRow(Table.java:589)
    at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
    at org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:78)
    at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:44)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:619)

ROW-READ-STAGE:6 is a little different:

"ROW-READ-STAGE:6" prio=10 tid=0x00007f1b78b4e000 nid=0x1943 runnable [0x0000000046330000]
   java.lang.Thread.State: RUNNABLE
    at com.reardencommerce.kernel.collections.shared.evictable.ConcurrentLinkedHashMap$Node.appendToTail(ConcurrentLinkedHashMap.java:540)
    at com.reardencommerce.kernel.collections.shared.evictable.ConcurrentLinkedHashMap.putIfAbsent(ConcurrentLinkedHashMap.java:281)
    at com.reardencommerce.kernel.collections.shared.evictable.ConcurrentLinkedHashMap.put(ConcurrentLinkedHashMap.java:256)
    ... (the remaining frames are identical to the trace above)

Original issue reported on code.google.com by [email protected] on 1 Sep 2009 at 5:23

change weight references from int to long to support map capacity based on byte size that is > 2GB

Current Behavior:
An integer weight, when used for maps that bound their capacity by byte size,
means the map is limited to 2GB.

Desired Behavior:
change all references to weight from an int to a long

Complexity / Amount of Change Required:
It took me about 5 minutes in Eclipse to change all the references and method
return types in the required methods of the ConcurrentLinkedHashMap, Weighers,
and Weigher classes.

Original issue reported on code.google.com by [email protected] on 20 Apr 2011 at 1:30

gson deserialize fail

I use ConcurrentLinkedHashMap to store data and serialize it to a String on
disk.

When deserializing the string with gson, this exception occurs:

01-17 17:34:35.647: W/System.err(8870): java.lang.IllegalArgumentException: invalid value for field
01-17 17:34:35.647: W/System.err(8870):     at java.lang.reflect.Field.setField(Native Method)
01-17 17:34:35.647: W/System.err(8870):     at java.lang.reflect.Field.set(Field.java:588)
01-17 17:34:35.647: W/System.err(8870):     at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.read(ReflectiveTypeAdapterFactory.java:95)
01-17 17:34:35.647: W/System.err(8870):     at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:172)
01-17 17:34:35.647: W/System.err(8870):     at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.read(ReflectiveTypeAdapterFactory.java:93)
01-17 17:34:35.647: W/System.err(8870):     at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:172)


The latest gson deserializes the data as "com.google.gson.internal.LinkedTreeMap"
rather than ConcurrentLinkedHashMap. Maybe the reason is that
ConcurrentLinkedHashMap has no default constructor, or is it something else?


The class I want to serialize/deserialize:

public class MediaMapping {
    public MediaMapping() {
        map = new ConcurrentLinkedHashMap.Builder<String, MediaValue>()
                .maximumWeightedCapacity(Constant.MEDIA_CACHE_LIMIT) // 1GB, internal storage, not memory
                .weigher(memoryUsageWeigher)
                .listener(listener)
                .build();
    }
    ...
}
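
A workaround sketch rather than a fix: since gson cannot instantiate the cache
reflectively, deserialize into a plain Map and copy the entries into the
freshly built cache, keeping the builder-configured weigher and listener intact
(the json variable and the MediaValue type are from the report; the rest is
illustrative):

    import com.google.gson.Gson;
    import com.google.gson.reflect.TypeToken;
    import java.lang.reflect.Type;
    import java.util.Map;

    // Read the entries back as a plain map, then copy them into the cache.
    Type type = new TypeToken<Map<String, MediaValue>>() {}.getType();
    Map<String, MediaValue> entries = new Gson().fromJson(json, type);
    map.putAll(entries);

Alternatively, registering a gson InstanceCreator that returns a freshly built
ConcurrentLinkedHashMap should let gson populate the cache directly.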

Original issue reported on code.google.com by [email protected] on 17 Jan 2014 at 10:07

Cache capacity limitation of being an Integer

What steps will reproduce the problem?
1. use ByteArrayWeigher
2. set the capacity to > 2 * 1024 * 1024 * 1024

What is the expected output? What do you see instead?
It should work as expected, since we have the available memory.

What version of the product are you using? On what operating system?
1.2

Please provide any additional information below.
The following exception is thrown.
Caused by: java.lang.IllegalArgumentException
    at com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Builder.initialCapacity(ConcurrentLinkedHashMap.java:1647)
    at org.apache.cassandra.cache.SerializingCache.<init>(SerializingCache.java:63)
    at org.apache.cassandra.cache.SerializingCacheProvider.create(SerializingCacheProvider.java:33)
    at org.apache.cassandra.service.CacheService.initRowCache(CacheService.java:129)
    at org.apache.cassandra.service.CacheService.<init>(CacheService.java:88)
    at org.apache.cassandra.service.CacheService.<clinit>(CacheService.java:63)

The fix may be to make the capacity variable a long instead of an int.

Original issue reported on code.google.com by [email protected] on 13 Apr 2012 at 8:29

Use ConcurrentSkipListMap if size >1.3M

Current Behavior:
The FAQ describes why a hash table approach is attractive at average use-case
sizes, but performs poorly at extremely large use-cases. In the Ehcache forum,
a user had performance problems with a cache size of 60M entries. This would
result in 915 comparisons on the segment's list.

Desired Behavior:
If the data store were instead skip-list based, then at 60M entries it would
require 26 comparisons. The cross-over is at 1.33M entries, beyond which a
skip-list provides better performance characteristics.

Complexity / Amount of Change Required:
The adoption of ConcurrentSkipListMap would require that clients have adopted
Java 6. The current expectation is Java 5.

Alternatively, the data store could be specified either by a constructor 
argument or retrieved from a protected delegate method. While both 
approaches would work, they would leak implementation details. The class 
name shouldn't be "ConcurrentLinkedHashMap" any longer, as hashing may not 
be the strategy.

Another approach is to migrate to a package-private
"AbstractConcurrentLinkedMap" and provide "ConcurrentLinkedHashMap" and
"ConcurrentLinkedSkipListMap" implementations. This hides internal details
and allows exposure of the NavigableMap methods. Unfortunately this would
require conditional compilation to build a Java 5 compatible jar that lacked
the second data structure.

Original issue reported on code.google.com by [email protected] on 26 May 2009 at 9:12

Concurrent LIRS cache implementation

I have implemented a concurrent LIRS cache. It is actually only an 
approximation by default, as the cache is segmented (each segment is 
synchronized - same as the concurrent hash map), and because re-ordering 
(move-to-front) only occurs if a number of other entries have been moved to the 
front first. Both parameters are configurable, so it is possible to tune it for 
speed (which might somewhat affect hit ratio) or maximum hit ratio.

On my machine, I get:

mvn -P efficiency test
LIRS (existing LIRS impl):
hits=103,580 (41% percent), misses=146,420 (59% percent)
hits=214,571 (43% percent), misses=285,429 (57% percent)
hits=329,788 (44% percent), misses=420,212 (56% percent)
LIRS_APRROX (my impl):
hits=106,534 (43% percent), misses=143,466 (57% percent)
hits=224,610 (45% percent), misses=275,390 (55% percent)
hits=341,993 (46% percent), misses=408,007 (54% percent)

mvn -P perfHash test
Avg with 10 threads: 
165053151 for CHM_16
176652094 for NBHashMap
008928183 for LHM
001028974 for CLHM_16 
046569843 for CLIRS_16 (my impl)
(Stddev is less than 1% for all implementations except for CLHM).

(I also ran "mvn -P cachebench test" and got good numbers, but I had to change 
the size to avoid out-of-memory).

The implementation is somewhat tested, but not completely - tests are available 
at http://code.google.com/p/h2database/

Original issue reported on code.google.com by [email protected] on 16 Oct 2012 at 9:47


API to iterate keys in order of hotness

As described in https://issues.apache.org/jira/browse/CASSANDRA-1966, it would 
be useful for Cassandra to be able to traverse the map in order of hotness.  
(Increasing hotness preferred, but decreasing is also workable.)

On that page, Ben said "these would be more expensive as they would require 
traversing the LRU chain to perform a copy and be a blocking operation, but 
would not affect read/write operations."

I'm a little unclear as to how an operation would be both blocking but not 
affecting reads/writes; can you elaborate?
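
For reference, snapshot views along these lines did ship in later releases
(method names per the 1.x javadoc; treat them as an assumption for older jars).
They copy the ordering as a blocking step against the policy's maintenance
work, which is why they do not affect the hash table's concurrent reads and
writes:

    import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;
    import java.util.Map;
    import java.util.Set;

    ConcurrentLinkedHashMap<String, Integer> map =
        new ConcurrentLinkedHashMap.Builder<String, Integer>()
            .maximumWeightedCapacity(100)
            .build();

    // Keys ordered from coldest (least likely to be retained) to hottest.
    Set<String> coldToHot = map.ascendingKeySet();

    // A snapshot of the ten hottest entries, hottest first.
    Map<String, Integer> hottest = map.descendingMapWithLimit(10);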

Original issue reported on code.google.com by [email protected] on 12 Jan 2011 at 6:32

nullListener is not Serializable

What steps will reproduce the problem?
1. create a ConcurrentLinkedHashMap without a listener
2. serialize it

What is the expected output? What do you see instead?

Expected: successful serialization.
Instead: NotSerializableException on ConcurrentLinkedHashMap$1.

What version of the product are you using? On what operating system?

Using the version from
http://code.google.com/p/concurrentlinkedhashmap/wiki/ProductionVersion
on Linux.

Examining the svn head source shows it would also be an issue there if the
entire map hadn't been made not Serializable. (pity)

Workaround: 
    oos.writeObject(new HashMap(clhm));

Original issue reported on code.google.com by [email protected] on 31 Mar 2010 at 1:01

concurrentlinkedhashmap don't include the license file

The LICENSE and NOTICE files are not available in the source directory
structure. Please add a license and copyright notice. The Fedora packaging
guidelines are very strict about this:
https://fedoraproject.org/wiki/Packaging:LicensingGuidelines?rd=Packaging/LicensingGuidelines#License_Text
thanks
regards

Original issue reported on code.google.com by [email protected] on 23 May 2013 at 9:37

where is the cache stats

Google Guava's Cache has a cache.stats() method which gives some cache
statistics. Does ConcurrentLinkedHashMap have a similar method?

Original issue reported on code.google.com by [email protected] on 30 Apr 2014 at 5:20

Could ConcurrentSkipListMap be used as backing map?

Hi - 

I was looking into your cache implementation to check if I could use a 
ConcurrentSkipListMap because I need to be able to seek into the cache with 
ceilingKey and floorKey methods.

Keys have two components and would look like:

key.1
key.2
key.4

Searching would look like this:

ceilingKey(key) -> key.1
ceilingKey(key.1) -> key.1
ceilingKey(key.3) -> key.4

floorKey(key.magicendmarker) -> key.4

I would also need to expose tailKeyIterator(fromKey) and headKeyIterator(toKey)

Value access would always be performed via get(key)

This is not really a feature request but an inquiry if you see any problems 
doing that. From my understanding of the code it should work.
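
For what it's worth, the lookups above behave as described on a plain
java.util.concurrent.ConcurrentSkipListMap; a runnable sketch (keys are the
examples above, with '\uffff' standing in for the magic end marker):

    import java.util.concurrent.ConcurrentSkipListMap;

    ConcurrentSkipListMap<String, String> m = new ConcurrentSkipListMap<String, String>();
    m.put("key.1", "v1");
    m.put("key.2", "v2");
    m.put("key.4", "v4");

    m.ceilingKey("key");      // -> "key.1"
    m.ceilingKey("key.1");    // -> "key.1"
    m.ceilingKey("key.3");    // -> "key.4"
    m.floorKey("key.\uffff"); // -> "key.4"

    // The requested iterators map onto the NavigableMap views:
    m.tailMap("key.2").keySet().iterator(); // key.2, key.4
    m.headMap("key.4").keySet().iterator(); // key.1, key.2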

Thanks,
Daniel



Original issue reported on code.google.com by [email protected] on 15 Jul 2011 at 5:32

Provide getQuiet() method?

Current Behavior:

  The existing code exposes a get() call which alters the eviction characteristics for any cache entry it returns.

Desired Behavior:

  I'd like to see an additional method called something like getQuiet() which returns the cache entry but does not interact with the eviction policy at all.  It's used for "peeking" into the map.  Caches like Ehcache and JCS have getQuiet() methods, which are used by reporting and memory management tools.

My use case right now is that I'd like to replace the JCS default memory cache 
(which has awful concurrent performance; it locks the whole cache during gets 
and puts) with an implementation backed by ConcurrentLinkedHashMap.  I cannot 
currently implement the getQuiet() method of the JCS MemoryCache interface 
(well, not properly).

Complexity / Amount of Change Required:

  It's a small change.  Something like, for example:

  public V getQuiet(Object key) {
      final Node node = data.get(key);
      if (node == null) {
          return null;
      }
      return node.getValue();
  }

  I know I could make this change myself to a fork of the code but before I do that I'm wondering if this is something you'd be interested in providing.  

Any thoughts?  Thanks!

Yours,
Joseph Calzaretta



Original issue reported on code.google.com by [email protected] on 9 Jul 2012 at 6:14

Make capacity and weightedSize long

I'm using CLHM for a cache, with weights denoting the sizes (in bytes) of the 
respective elements. Problem is, caches nowadays can easily surpass the 2GB 
limit of a positive int.

A workaround would be to shave a few bits off the element size for the weight, 
simply making the cache a little too eager to evict (as element sizes would be 
essentially rounded up), but making capacity and weightedSize long is a minor 
change.

Original issue reported on code.google.com by [email protected] on 23 Feb 2012 at 2:42

add an option to pass a weight to put and replace methods for weights based on the byte size of a value

Current Behavior:
weights are computed via a weigher object

Desired Behavior:
Keep the current behavior, but add an option to pass a weight to the put and
replace methods, for weights based on the byte size of the value. This helps
performance, as low-level DBs like BerkeleyDB return byte arrays anyway, so
just passing the length field of the existing byte array to the put and
replace methods is better than having a weigher object that has to spend CPU
cycles calculating the size (via the creation of another byte array or by
other methods).

Complexity / Amount of Change Required:
This sort of thing should do the job:

  @Override
  public V put(K key, V value) {
    return put(key, value, false, -1);
  }

  @Override
  public V putIfAbsent(K key, V value) {
    return put(key, value, true, -1);
  }

  V put(K key, V value, boolean onlyIfAbsent) {
    return put(key, value, onlyIfAbsent, -1);
  }

  /**
   * Adds a node to the list and the data store. If an existing node is found,
   * then its value is updated if allowed.
   *
   * @param key key with which the specified value is to be associated
   * @param value value to be associated with the specified key
   * @param onlyIfAbsent a write is performed only if the key is not already
   *     associated with a value
   * @return the prior value in the data store or null if no mapping was found
   */
  V put(K key, V value, boolean onlyIfAbsent, long theWeight) {
    checkNotNull(key, "null key");
    checkNotNull(value, "null value");

    // changed from int to long to support a max capacity based on a byte
    // size that is not limited to 2GB
    final long weight = (theWeight < 1) ? weigher.weightOf(value) : theWeight;
    final WeightedValue<V> weightedValue = new WeightedValue<V>(value, weight);
    final Node node = new Node(key, weightedValue);

    // ...and the same for the replace methods

Original issue reported on code.google.com by [email protected] on 20 Apr 2011 at 1:24

Make ExpirableMap decorator available

I can see quite a few people wanting to use the expiration policies presented 
in the wiki along with the ConcurrentLinkedHashMap implementation.
It'd be excellent if they were available within the library artifact.

Original issue reported on code.google.com by [email protected] on 3 Feb 2012 at 11:42

Support a pluggable evaluator for capacity handling

Current Behavior:

Capacity handling currently is not pluggable and thus cannot be configured to 
support different schemes for determining when eviction should occur.

Desired Behavior:

Allowing for pluggable capacity handling would allow the CLHM to be used OOTB
with an improvement in the works in the Voldemort project: caching based on
used memory rather than on a predetermined entry count
(http://code.google.com/p/project-voldemort/issues/detail?id=225)

Complexity / Amount of Change Required:

Unknown, but I suspect it wouldn't be a huge change to implement.

Original issue reported on code.google.com by [email protected] on 14 Sep 2010 at 10:15

weights as long

Currently weights are ints instead of longs. This causes problems if objects
have large weights (such as sizes in bytes). A simple workaround is to divide
the byte-weight by some constant, but that breaks down for small values. Is
there an advantage to weights being ints?
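
A sketch of the divide-by-a-constant workaround using the library's Weigher
interface; the kilobyte granularity is illustrative, and the Math.max clamp is
exactly where it breaks down, since everything under 1KB costs the same single
slot:

    import com.googlecode.concurrentlinkedhashmap.Weigher;

    // Weigh in kilobytes instead of bytes so an int weight can describe a
    // cache larger than 2GB; values under 1KB all round up to weight 1.
    Weigher<byte[]> kilobyteWeigher = new Weigher<byte[]>() {
        @Override
        public int weightOf(byte[] value) {
            return Math.max(1, value.length / 1024);
        }
    };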

Original issue reported on code.google.com by [email protected] on 26 Oct 2011 at 4:28

Maven Central Repository

Issue:

Could you guys upload the project to the Central Maven, Apache, Codehaus, or
Sonatype repositories? I'm worried about the java.net Maven repository's
future after the merge with the Kenai project.

Original issue reported on code.google.com by [email protected] on 2 Jun 2010 at 7:30

UnsupportedClassVersionError on JRE1.5

What steps will reproduce the problem?
1. When you run applications on JRE1.5

What is the expected output? What do you see instead?
Should not throw error

What version of the product are you using? On what operating system?
1.0 RC

Please provide any additional information below.

clhm is a dependency in my code; when I run it on JDK 1.5, it throws
UnsupportedClassVersionError: Bad version number in .class file

Backported code would help me use it in my project.

Thanks, Anand 

Original issue reported on code.google.com by [email protected] on 14 Apr 2010 at 10:29

deploy concurrentlinkedhashmap to public Maven repository

Current Behavior:
Download the jar and "install" it manually.

Desired Behavior:
Add a Maven dependency to pom.xml and begin using this excellent
implementation. This will make this library (and any of its updates)
trivially accessible, which will very likely increase its use.

Complexity / Amount of Change Required:
Some complexity, but minimal code (pom.xml).  The more willingness to adopt
the "Maven way", the easier this is.
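
For reference, these are the coordinates the library later shipped under on
Maven Central (matching the repo1.maven.org path cited in the 1.1 release
issue below; the version shown is illustrative):

    <dependency>
      <groupId>com.googlecode.concurrentlinkedhashmap</groupId>
      <artifactId>concurrentlinkedhashmap-lru</artifactId>
      <version>1.1</version>
    </dependency>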

Original issue reported on code.google.com by [email protected] on 17 Feb 2009 at 12:54

Re-evaluate weight api (nice defaults; zero weights)

A Weigher allows values to consume more than one slot in the cache, giving
users more flexibility than bounding only by the number of key-value pairs.
This allows multimap or memory-bound caches, for example.

This was implemented, but I left the semantics around collections unresolved,
as it wasn't clear what the expected behavior should be if a collection is
empty. Empty values may be used for negative caching, to avoid lookups when
the value is known not to exist. If a zero weight is allowed, is there a
concern of the cache growing unbounded due to too much negative caching and no
eviction being triggered? Ideally soft references would always be used as a
fall-back if memory becomes tight, so zero weights wouldn't cause an issue.

This was left unresolved since I wasn't sure what was least surprising and 
expected user feedback to drive a resolution. The choices seem to be:
 * Allow weighted value of zero
 * Use Math.max(list.size(), 1) in Weighers
 * Use list.size() + 1 in Weighers

I'm leaning towards allowing zero.

---
Issue raised by Grails: http://jira.codehaus.org/browse/GRAILS-6622

Original issue reported on code.google.com by [email protected] on 2 Sep 2010 at 6:07

Race condition when updating a value's weight

> What steps will reproduce the problem?
1. Use non-singleton weigher (values > 1)
2. Change a value's weight, re-put.
3. Create enough load to simultaneously evict the value in (2).

> What is the expected output? What do you see instead?
An NPE is thrown during eviction, due to the map overflowing its capacity but
having no more nodes in the LRU chain to evict.

> Cause:
The problem is due to a stale volatile read of the node's weight in #evict(). 
While the entry is no longer in the data map (CHM), it may still be mutated in 
#put() using the decorator's segment lock.

> Solution:
Update #evict() to read the weight under the segment lock, thereby forcing any 
mutations to have completed and that the node is not referenced.

Lock lock = segmentLock[segmentFor(key)];
lock.lock();
try {
  weightedSize -= node.weightedValue.weight;
} finally {
  lock.unlock();
}

Reference:
See the discussion thread at:
http://groups.google.com/group/concurrentlinkedhashmap/browse_thread/thread/5b4b3df843efd138

Original issue reported on code.google.com by [email protected] on 21 Oct 2010 at 8:33

Checkstyles issues

Email from Greg Luck (ehcache):
-------------------------------

I get the following checkstyle issues. It will be a lot easier to stay in
sync with you if we can fix these.

The other thing this is showing up is that the standard JavaDoc and Sun
naming conventions are not being followed.

e.g. private static final AtomicReferenceFieldUpdater<Node, Node> prevUpdater
would normally be PREV_UPDATER as it is a constant.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:77: Type Javadoc comment is missing an @param <K> tag.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:77: Type Javadoc comment is missing an @param <V> tag.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:79:5: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:79:40: Variable 'listeners' must be private and have accessor methods.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:80:5: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:80:40: Variable 'data' must be private and have accessor methods.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:81:5: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:81:25: Variable 'capacity' must be private and have accessor methods.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:82:5: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:82:26: Variable 'policy' must be private and have accessor methods.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:83:5: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:83:25: Variable 'length' must be private and have accessor methods.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:84:5: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:84:22: Variable 'head' must be private and have accessor methods.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:85:5: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:85:22: Variable 'tail' must be private and have accessor methods.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:96:39: '16' is a magic number.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:113:75: '0.75f' is a magic number.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:178: Don't use trailing comments.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:251: Don't use trailing comments.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:411:13: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:414:13: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:423:13: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:426:13: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:430:13: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:443:13: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:452:13: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:484:72: Name 'valueUpdater' must match pattern '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:486:71: Name 'stateUpdater' must match pattern '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:488:70: Name 'prevUpdater' must match pattern '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:490:70: Name 'nextUpdater' must match pattern '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:493: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:494:13: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:494:23: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:494:33: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:494:44: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:494:53: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:521:9: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:525:9: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:528:9: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:531:9: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:534:9: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:538:9: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:541:9: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:544:9: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:548:9: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:551:9: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:554:9: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:558:9: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:561:9: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:565:9: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:568:9: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:571:9: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:585:15: 'value' hides a field.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:599:9: Missing a Javadoc comment.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:608:69: ',' is not followed by whitespace.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:718:46: ',' is not followed by whitespace.
/Users/gluck/work/ehcache/core/src/main/java/net/sf/ehcache/concurrent/ConcurrentLinkedHashMap.java:718:68: ',' is not followed by whitespace.



Original issue reported on code.google.com by [email protected] on 26 Feb 2009 at 1:53

create CLHM classes for each primitive type for the keys of the map to eliminate autoboxing/unboxing

Current Behavior:
autoboxing is used when primitive values are used as keys, which reduces
performance.

Desired Behavior:
create CLHM classes for each primitive type for the keys of the map to
eliminate autoboxing/unboxing, e.g.

byteConcurrentlinkedhashmap
shortConcurrentlinkedhashmap
intConcurrentlinkedhashmap
longConcurrentlinkedhashmap

Complexity / Amount of Change Required:
public V put(K key, V value) becomes public V put(long key, V value)
for longConcurrentlinkedhashmap, etc.

lots of methods would need to be updated but it should not take too long.

See the Trove collections to see how they do it.

Original issue reported on code.google.com by [email protected] on 20 Apr 2011 at 1:40

Cannot start as OSGi bundle

Imported as a bundle into Apache Karaf (Karaf version 2.3.3, OSGi Framework
org.eclipse.osgi - 3.8.0.v20120529-1548), the bundle cannot start, failing with:

com.googlecode.concurrentlinkedhashmap.lru_1.4.0 [234]" could not be resolved. Reason: Missing Constraint: Import-Package: sun.misc; version="0.0.0"

Proposed solution: an import directive to exclude the sun.misc import (it is
provided implicitly by the JVM runtime).

Original issue reported on code.google.com by [email protected] on 12 May 2014 at 1:16

  • Merged into: #40

Performance & memory optimizations

Version 1.0 focused on providing a concurrent architecture, but did not receive
a deep performance analysis. A performance release should be provided to
fine-tune the data structure. A regression test [1] indicated that increased
memory usage is the primary concern. Optimization ideas are listed below
(please suggest others).

[1] https://issues.apache.org/jira/browse/CASSANDRA-975

-----
Drain all buffers when the LRU lock is acquired.
 + avoid artificial failure (coerced in load test)
 + increased hit rate
 + reduced memory
 + fewer drains
 - more work per drain

Currently only a segment's recency buffer is drained when it exceeds the 
threshold or a write occurs. Instead all buffers should be drained. The 
amortized cost should remain small, with an increased penalty for writers who 
proactively drain.

-----
Remove segment lock for writes
 + Removes CLHM lock on write (expands CHM's lock scope)
 + Removes hashing tricks to match CHM's lock to avoid contention
 + Write queue no longer needs to be a FIFO (speedup tricks)?
 + May make a (minor) speedup.

Currently a strict ordering is required so that the write queue is ordered with 
a segment (e.g. put followed by remove). This ordering constraint could be 
removed by making the logic tolerant to race conditions. This could be done by 
adding a status flag to the WeightedValue.

-----
Replace segment recency queue with AtomicReferenceArray-based circular buffer. 
 + Removes queue tail as a hotspot
 - May be complicated, need to think through
 - The queue doesn't appear to be a bottleneck yet.

A read to a segment may contend on (1) adding to the segment's recency queue, 
(2) incrementing the queue's length. The idea is to reuse (2) as the array slot 
so that only one CAS operation is required. This may not be worth the effort 
since this has not been shown to be a problem and an increase to the 
concurrencyLevel would reduce contention.

-----
Determine the best threshold value.
 + Ideal balance would optimize concurrency vs. memory usage.

Currently the recency threshold is 64. This is a magic number chosen at random 
and it was not evaluated.


Original issue reported on code.google.com by [email protected] on 15 Jun 2010 at 1:12

Make ConcurrentLinkedHashMap available as an OSGi bundle

Please consider making ConcurrentLinkedHashMap available as an OSGi bundle.
Any other libraries (or applications) which depend on ConcurrentLinkedHashMap
and which are already available, or are soon planned to become available, as
OSGi bundles will benefit from this.


Complexity / Amount of Change Required:

Should be relatively small. The packaging in the pom.xml file needs to change 
from jar to bundle and the maven-bundle-plugin needs to be configured. See the 
Guava's pom.xml for an example.

Original issue reported on code.google.com by simeon.malchev on 6 Apr 2014 at 1:12

Version 1.1 was released incorrectly

What steps will reproduce the problem?

Compare the SHA1 in
http://repo1.maven.org/maven2/com/googlecode/concurrentlinkedhashmap/concurrentlinkedhashmap-lru/1.1/concurrentlinkedhashmap-lru-1.1.jar.sha1
with the SHA1 in
http://code.google.com/p/concurrentlinkedhashmap/downloads/detail?name=concurrentlinkedhashmap-lru-1.1.jar&can=2&q=

What is the expected output? What do you see instead?

They should be the same. They are not.

Please provide any additional information below.

Apparently version 1.1 was released twice. The 1.1 JAR that can be downloaded 
from code.google.com is the result of the second release attempt. The one from 
repo1.maven.org is the result of the first release attempt. This is a big no-no 
with Maven. You cannot release the same non-snapshot version more than once. 
Maven ignores subsequent releases. Even worse, there are actual semantic 
differences 
(http://code.google.com/p/concurrentlinkedhashmap/source/detail?r=494) between 
the first 1.1 release and the second. Maven users will miss out on those 
because the Maven repo did not accept the second release and stuck with the 
first one. 

Could you possibly branch the 1.1 tag and release that as 1.1.1?

Original issue reported on code.google.com by [email protected] on 8 Mar 2011 at 8:31

Evict dead nodes first in LRU Eviction Policy

Current Behavior:
At the moment onGet() in EvictionPolicy.LRU() copies the node and inserts
the new node onto the tail of the list (offer). The old node's value is set
to null and it remains at its position. The dead node therefore still
consumes space until it is evicted. I know this is by design, since you have
chosen a lazy removal strategy.

Desired Behavior:
My idea is to additionally always move the dead nodes to the beginning of
the list. Then they would be evicted first, and the user-defined capacity
would be unaffected by them. Otherwise, an active node might be evicted
because dead nodes are consuming space, which leads to a wrong capacity from
the perspective of the user of the map.

Complexity / Amount of Change Required:
If desired, I can come up with an implementation of this.

Original issue reported on code.google.com by [email protected] on 22 Jan 2009 at 1:46

Excessive memory usage due to padding

Current Behavior:
ConcurrentLinkedHashMap.NCPU is currently set to 
Runtime.getRuntime().availableProcessors().  This value is then used to 
determine the number of read buffers which in turn also affects the size of the 
map in memory.  While the number of available processors is a good default, it 
can cause memory issues if there is a Java application with a relatively small 
max heap size and thousands of map instances that is running on a server with a 
large number of CPUs.


Desired Behavior:
The NCPU value should be overridable with a System property to allow specific 
applications to more directly control the number of read buffers and size of 
the map.


Complexity / Amount of Change Required:
Just change the line of code to

static final int NCPU = Integer.getInteger("concurrentlinkedhashmap.ncpu",
    Runtime.getRuntime().availableProcessors());
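
If adopted, an application could then cap the read buffer count at launch;
the property name is the one proposed above, and the jar name is illustrative:

$ java -Dconcurrentlinkedhashmap.ncpu=2 -jar application.jar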

Original issue reported on code.google.com by [email protected] on 26 Jan 2015 at 5:15

NPE on keySet()

Email from Greg Luck (ehcache):
-------------------------------

I found a problem while running testConcurrentReadWriteRemove in CacheTest.
The issues below happen with Second Chance and FIFO. I have not tested LRU.

NPE when multiple threads concurrently call clear()

I was getting NPEs when multiple threads were calling clear() at the same
time.

java.lang.NullPointerException
    at net.sf.ehcache.concurrent.ConcurrentLinkedHashMap$EntryIteratorAdapter.next(ConcurrentLinkedHashMap.java:695)
    at net.sf.ehcache.concurrent.ConcurrentLinkedHashMap$EntryIteratorAdapter.next(ConcurrentLinkedHashMap.java:670)
    at java.util.AbstractMap$1$1.next(AbstractMap.java:383)
    at net.sf.ehcache.concurrent.ConcurrentLinkedHashMap.clear(ConcurrentLinkedHashMap.java:174)
    at net.sf.ehcache.store.MemoryStore.clear(MemoryStore.java:209)
    at net.sf.ehcache.store.MemoryStore.removeAll(MemoryStore.java:202)
    at net.sf.ehcache.Cache.removeAll(Cache.java:1441)
    at net.sf.ehcache.Cache.removeAll(Cache.java:1426)
    at net.sf.ehcache.CacheTest$1.execute(CacheTest.java:1778)
    at net.sf.ehcache.AbstractCacheTest$1.run(AbstractCacheTest.java:158)
java.lang.NullPointerException


I changed Entry to check for nulls. 

/**
 * {@inheritDoc}
 */
public Entry<K, V> next() {
    current = iterator.next();

    K key = current.getKey();
    Node<K, V> currentNode = current.getValue();
    V value;
    if (currentNode != null) {
        value = currentNode.getValue();
    } else {
        value = null;
    }
    return new SimpleEntry<K, V>(key, value);
}

ArrayIndexOutOfBoundsException on toArray()

This one happens when I do map.keySet().

SEVERE: Throwable java.lang.ArrayIndexOutOfBoundsException: 3 3
java.lang.ArrayIndexOutOfBoundsException: 3
at java.util.AbstractCollection.toArray(AbstractCollection.java:126)
at net.sf.ehcache.store.MemoryStore.getKeyArray(MemoryStore.java:296)
at net.sf.ehcache.Cache.getKeys(Cache.java:1087)
at net.sf.ehcache.CacheTest$5.execute(CacheTest.java:1802)
at net.sf.ehcache.AbstractCacheTest$1.run(AbstractCacheTest.java:158)
Feb 15, 2009 7:03:00 PM net.sf.ehcache.AbstractCacheTest$1 run
SEVERE: Throwable java.lang.ArrayIndexOutOfBoundsException: 422 422
java.lang.ArrayIndexOutOfBoundsException: 422
at java.util.AbstractCollection.toArray(AbstractCollection.java:126)
at net.sf.ehcache.store.MemoryStore.getKeyArray(MemoryStore.java:296)
at net.sf.ehcache.Cache.getKeys(Cache.java:1087)
at net.sf.ehcache.CacheTest$5.execute(CacheTest.java:1802)
at net.sf.ehcache.AbstractCacheTest$1.run(AbstractCacheTest.java:158)
Feb 15, 2009 7:03:01 PM net.sf.ehcache.AbstractCacheTest$1 run

ConcurrentHashMap overrides this; CLHM does not, which I think is the
problem.

Original issue reported on code.google.com by [email protected] on 26 Feb 2009 at 2:04

ConcurrentMapTest fails

What steps will reproduce the problem?

1. Last Changed Rev: 346
2. java version "1.6.0_19"
   Java(TM) SE Runtime Environment (build 1.6.0_19-b04)
   Java HotSpot(TM) 64-Bit Server VM (build 16.2-b04, mixed mode)
   Vista 64-bit dual core
3. ant run-tests

What is the expected output? What do you see instead?

   [testng] ===============================================
   [testng] Ant suite
   [testng] Total tests run: 43, Failures: 1, Skips: 0
   [testng] ===============================================


Please use labels and text to provide additional information.

TestNG files attached.

Original issue reported on code.google.com by [email protected] on 9 Apr 2010 at 2:08


Bloom filter example has mistakes in setAt: div and mod by Long.SIZE instead

What steps will reproduce the problem?
1. Take the example class and instantiate a BloomFilter<String>(1000, 1.0F/1000)
2. Add more than about 13 strings to it.
3. Spot that further adds fail. Print out the final state using toString.

What is the expected output? What do you see instead?
All bits in the first word show as 1, not a spread across many words, just
those 64 (no surprise, given the bug below).



Please provide any additional information below.
Need to fix setAt as follows:

    private boolean setAt(int index, long[] words) {
        // int i = index / bits;             // bad old line
        // int bitIndex = index % bits;      // bad old line
        int i = index / Long.SIZE;        // bits/length == Long.SIZE (see the constructor), so use that here
        int bitIndex = index % Long.SIZE; // ditto

Commented out the bad old lines.

Should now work.
  - kieron

Original issue reported on code.google.com by [email protected] on 25 Jul 2012 at 4:23

Identity problem

The java.util.concurrent.ConcurrentMap API:

boolean remove(Object key, Object value)
Removes the entry for a key only if it is currently mapped to a given value. Acts as:

  if (map.containsKey(key) && map.get(key).equals(value)) {
      map.remove(key);
      return true;
  } else {
      return false;
  }

Can the value parameter be compared by identity?
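
For illustration, a sketch (with a hypothetical Token type) of why the
contract above already behaves like an identity comparison when the value
class does not override equals():

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    // Token does not override equals(), so equals() falls back to reference
    // identity and remove(key, value) only matches the stored instance.
    class Token { }

    ConcurrentMap<String, Token> map = new ConcurrentHashMap<String, Token>();
    Token stored = new Token();
    map.put("k", stored);

    map.remove("k", new Token()); // false: a different instance
    map.remove("k", stored);      // true: the same instance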


Original issue reported on code.google.com by [email protected] on 15 Sep 2009 at 9:01
