
mapdb's People

Contributors

addvilz, ajermakovics, atoulme, batterseapower, bonifaido, civanyp, cloud-fan, flaktack, gitter-badger, gkorland, jankotek, konrader, lpellegr, masudshrabon, mebigfatguy, mfriedenhagen, minborg, mnavarro-ob, pdroalves, pettermahlen, rodiongork, rtreffer, shabanovd, sleimanjneidi, tea-dragon, toilal, tquellenberg, uweschaefer, vbauer, wangpeidong


mapdb's Issues

Basic network communication support

MapDB should have basic support for communication over a network socket. We need two Engine wrappers, one for the server side and one for the client side. MapDB will define an efficient binary protocol for the socket communication.

It will not be a full server/client architecture. Instead it will use an already established connection supplied by the user, who will have to deal with things like establishing the connection, timeouts, authentication and so on. A rough client-side sketch is below.
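A minimal sketch of what the client-side wrapper could look like, assuming a deliberately simplified Engine-like interface and made-up opcodes (the real MapDB Engine interface is larger, and no wire protocol is defined yet):

    import java.io.*;
    import java.net.Socket;

    // Deliberately simplified, hypothetical Engine-like interface; the real
    // MapDB Engine has more methods (put, delete, commit, ...).
    interface RemoteEngine {
        byte[] get(long recid) throws IOException;
        void update(long recid, byte[] data) throws IOException;
    }

    // Client-side wrapper: forwards record reads/writes over a socket the
    // caller has already connected and authenticated. Opcodes are made up.
    class ClientEngine implements RemoteEngine {
        static final byte OP_GET = 1, OP_UPDATE = 2;
        private final DataInputStream in;
        private final DataOutputStream out;

        ClientEngine(Socket socket) throws IOException {
            this.in = new DataInputStream(socket.getInputStream());
            this.out = new DataOutputStream(socket.getOutputStream());
        }

        public synchronized byte[] get(long recid) throws IOException {
            out.writeByte(OP_GET);
            out.writeLong(recid);
            out.flush();
            int len = in.readInt();            // server replies: length + payload
            byte[] data = new byte[len];
            in.readFully(data);
            return data;
        }

        public synchronized void update(long recid, byte[] data) throws IOException {
            out.writeByte(OP_UPDATE);
            out.writeLong(recid);
            out.writeInt(data.length);
            out.write(data);
            out.flush();
        }
    }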

NPE when calling first() and last() on tree set

When I run this, I get an NPE. After fixing it in first(), I get an NPE when calling last().

        DB db = DBMaker.newMemoryDB().make();
        final NavigableSet<String> testSet = db.getTreeSet("testSet");

        testSet.add("1first");
        testSet.add("2second");
        testSet.add("3third");
        testSet.add("4fourth");

        db.commit();

        assertEquals("1first", testSet.first());
        assertEquals("4fourth", testSet.last());

Add snapshot isolation

MapDB needs snapshot isolation to be competitive as a DB engine. Snapshots should be independent of transactions; they should work even with transactions disabled.

I have a simple solution outlined; it should fit into the existing code without larger refactoring. A rough usage sketch is below.
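A rough sketch of how the proposed API could be used (db.snapshot() does not exist yet; the name and semantics shown are illustrative):

    // proposed usage; snapshot() is the API this issue asks for
    DB db = DBMaker.newMemoryDB().transactionDisable().make();
    Map<String, String> map = db.getHashMap("data");

    map.put("key", "one");
    DB snapshot = db.snapshot();              // frozen, read-only view of the store
    map.put("key", "two");

    snapshot.getHashMap("data").get("key");   // still returns "one"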

Serialization: support for writeObject and readObject

MapDB Serialization should be a drop-in replacement for Java Serialization. So, for example, all classes have to implement the Serializable marker interface to be serializable by MapDB.

However, Java Serialization is very complex and it is nearly impossible to implement complete support for all its features. Most important right now is support for Externalizable and the private writeObject/readObject methods. Some of the details are described here.

Without support for these methods, MapDB will fail to efficiently serialize many third-party classes.
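For illustration, a typical class of the kind MapDB currently cannot handle, using the standard Java serialization hooks (the class itself is made up):

    import java.io.*;

    // Uses the private writeObject/readObject hooks from standard Java
    // serialization; MapDB would need to detect and invoke these via reflection.
    public class Point implements Serializable {
        private transient int x, y;   // custom wire format instead of default fields

        private void writeObject(ObjectOutputStream out) throws IOException {
            out.defaultWriteObject();
            out.writeInt(x);
            out.writeInt(y);
        }

        private void readObject(ObjectInputStream in)
                throws IOException, ClassNotFoundException {
            in.defaultReadObject();
            x = in.readInt();
            y = in.readInt();
        }
    }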

DB.isClosed method

It would be nice to provide a DB.isClosed method to avoid multiple calls to close when it is performed in a shutdown hook. A sketch of the intended use is below.
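A sketch of the intended use (DB.isClosed() is the requested, not yet existing, method):

    // guard the shutdown hook so close() is not called a second time
    Runtime.getRuntime().addShutdownHook(new Thread() {
        @Override public void run() {
            if (!db.isClosed()) {     // isClosed() is the proposed method
                db.close();
            }
        }
    });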

Problems using keyset in HashMap

When traversing a keyset of a HashMap I encounter a never-finishing loop.
Entries seem to 'shift' while traversing the keyset.

for(String machineKey : collection.keySet()){
//Actions here
}

Made a mistake, the infinite loop occurs while using the HashMap:

DB db = DBMaker.newTempFileDB().cacheSize(1000).make();
ConcurrentMap<String, MachineVerloopData> verloopData = db.createHashMap("verloopdata",
        null, new MachineVerloopDataSerializer());

Exception in thread "JDBM writer" java.lang.OutOfMemoryError: Direct buffer memory

Getting this exception in some code I'm running:

Exception in thread "JDBM writer" java.lang.OutOfMemoryError: Direct buffer memory
    at java.nio.Bits.reserveMemory(Bits.java:658)
    at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
    at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)
    at org.mapdb.Volume$MemoryVolume.makeNewBuffer(Volume.java:376)
    at org.mapdb.Volume$ByteBufferVolume.ensureAvailable(Volume.java:181)
    at org.mapdb.StorageDirect.freePhysRecTake(StorageDirect.java:284)
    at org.mapdb.StorageDirect.recordUpdate(StorageDirect.java:131)
    at org.mapdb.AsyncWriteEngine.writerThreadRun(AsyncWriteEngine.java:100)
    at org.mapdb.AsyncWriteEngine.access$000(AsyncWriteEngine.java:36)
    at org.mapdb.AsyncWriteEngine$1.run(AsyncWriteEngine.java:65)

Using mapdb configured as:

DB db = DBMaker.newDirectMemoryDB()
    .transactionDisable()
    .asyncFlushDelay(100)
    .make();

return db.createTreeMap(name, 120, false, null, null, null);

When I have time I'll create an example that I can load up, but right now it's causing all kinds of grief.

If you have any ideas before then let me know.

This error is showing up on OS X JDK 7 and Linux JDK 7.

BTreeMap: add online defragmentation

The concurrent B-linked-tree used in MapDB does not support key removals efficiently. A key is just removed from its node, without collapsing and deleting empty nodes. So removing a large number of keys will leave empty tree nodes behind and affect performance.

Right now it is not a large problem, but it has to be fixed in the long run.

The tree defragmentation algorithm is described here.

Some problems with BTreeMap and array values

Hi, I don't know why, but when I use a BTreeMap<Integer, String[]>, as the size of the map increases it returns a plain Object[] for some records, resulting in a ClassCastException.

For example with this code:

    DB db = DBMaker.newFileDB(new File("cache"))
            .closeOnJvmShutdown()
            .make();

    ConcurrentNavigableMap<Integer, String[]> map = db.getTreeMap("1");

    for (int i = 0; i < 50000; i++) {
        map.put(i, new String[5]);
    }

    for (int i = 0; i < 50000; i=i+1000) {
        System.out.println(i+" "+map.get(i));
    }

    System.exit(0);

the output is:
...............................................................
45000 [Ljava.lang.String;@46cfd5ee
46000 [Ljava.lang.Object;@76e62093
47000 [Ljava.lang.String;@7e64cfe0
.................................................................

Storage: Compare and Swap update

MapDB storage needs to be able to perform record updates atomically. This can be done with an atomic CAS (compare-and-swap) operation. CAS is necessary for concurrent linked queues and other concurrent algorithms.

A CAS operation is not usually found in DBs, but here it makes sense to implement it. A sketch of the proposed operation is below.
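The proposed operation could look roughly like this at the Engine level (the signature, the engine/headRecid names and the Serializer.LONG constant are illustrative):

    // proposed storage-level operation: the update succeeds only if the
    // record's current value equals the expected value
    <A> boolean compareAndSwap(long recid, A expectedOldValue, A newValue,
                               Serializer<A> serializer);

    // usage sketch: atomically advance a counter stored in record headRecid,
    // using the standard lock-free retry loop built on top of CAS
    long oldHead, newHead;
    do {
        oldHead = engine.get(headRecid, Serializer.LONG);
        newHead = oldHead + 1;                 // hypothetical next value
    } while (!engine.compareAndSwap(headRecid, oldHead, newHead, Serializer.LONG));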

HugeInsert does not work

Hi,
I am running the example HugeInsert, and after 44% I get either a NullPointerException or an ArrayIndexOutOfBoundsException.

Most of the time I get a NullPointerException, and all I can say is that recordGet2 in RecordStoreAbstract returns null because the indexValue is null.

The code is still new to me, so I am not sure why it happens, but at least I can reproduce the error by running this test, HugeInsert.

AsyncWriteEngineTest ever finishing?

I have snapshot 0.9 (git 33a8269) here, and whenever I run "mvn install" the build process gets stuck for longer than I want to wait while showing "Running org.mapdb.AsyncWriteEngineTest". The other tests take only a few seconds at most, so I kill the build after 5 minutes.
This happens on a 32-bit Linux machine with both Oracle Java 1.6 and 1.7.
Is this a known problem, or am I doing something wrong?

Parallel collections

Scala has parallel collections. Since MapDB collections can be huge, it makes sense to implement something similar.

Scala Parallel Collections support operations such as map, reduce, filter and contains executed in parallel. The Fork/Join framework (used by Scala) needs to divide a collection into parts to work effectively, so parallel operations are usually performed on indexed, array-style collections where splitting into parts is easy. MapDB could use its internal tree node structure to achieve this split. My estimate is that an internal Fork/Join implementation would be 30% faster than a similar parallel solution relying on iterators. A rough illustration follows below.

The Scala parallel collections library is about 1 MB. The MapDB solution should fit into 50 KB.
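A rough illustration of the splitting idea using the JDK Fork/Join framework: this version splits a NavigableMap into key-range submaps, whereas the proposal would split directly on BTree nodes (everything below is an assumption-level sketch, not MapDB code):

    import java.util.NavigableMap;
    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveTask;

    // Counts non-null values in parallel by recursively splitting the map
    // into submaps; real MapDB support would split on internal tree nodes.
    class ParallelCount extends RecursiveTask<Long> {
        private final NavigableMap<Integer, String> map;

        ParallelCount(NavigableMap<Integer, String> map) { this.map = map; }

        @Override protected Long compute() {
            if (map.size() < 10_000) {              // small enough: go sequentially
                long n = 0;
                for (String v : map.values()) if (v != null) n++;
                return n;
            }
            int first = map.firstKey(), last = map.lastKey();
            int mid = first + (last - first) / 2;   // midpoint of the key range
            ParallelCount left  = new ParallelCount(map.headMap(mid, false));
            ParallelCount right = new ParallelCount(map.tailMap(mid, true));
            left.fork();                            // process the left half asynchronously
            return right.compute() + left.join();
        }
    }

    // usage: long n = new ForkJoinPool().invoke(new ParallelCount(map));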

Maximal record size is 64KB

Currently the MapDB store supports records with a maximal serialized size of 64KB. A BTree usually stores multiple keys and values in a single record (32 by default), which puts the practical limit somewhere around 1KB per entry.

We need to implement linked records to support record sizes up to 1GB. The chunking idea is sketched below.
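A sketch of the linked-record idea: each physical record holds up to 64KB of payload plus the recid of the next chunk. This is an illustrative fragment; store.put() stands in for a hypothetical raw record insert, and the layout is not the actual store format:

    // split a large byte[] into <=64KB chunks, each prefixed with the recid
    // of the following chunk (0 marks the last chunk); built from the tail
    // so each chunk can point at an already-stored successor
    static final int CHUNK = 64 * 1024 - 8;    // reserve 8 bytes for the next-pointer

    long putLinked(byte[] big) {
        long nextRecid = 0;
        for (int start = ((big.length - 1) / CHUNK) * CHUNK; start >= 0; start -= CHUNK) {
            int len = Math.min(CHUNK, big.length - start);
            java.nio.ByteBuffer buf = java.nio.ByteBuffer.allocate(8 + len);
            buf.putLong(nextRecid);             // pointer to the following chunk
            buf.put(big, start, len);
            nextRecid = store.put(buf.array()); // hypothetical raw store insert
        }
        return nextRecid;                       // recid of the first chunk
    }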

Storage: backup to zip

MapDB needs some kind of backup mechanism.

Probably the best option is to add a method which compresses the entire store and saves it as a zip file. For now the backup process will take an exclusive lock over the database and block other updates while it runs. Later we will upgrade the backup code to use a snapshot. A minimal sketch is below.

Compression should use multiple threads.
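A minimal single-threaded sketch using the JDK's ZipOutputStream (the method name and the single-file assumption are mine; a real implementation would hold the exclusive lock, and later a snapshot, while copying):

    import java.io.*;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipOutputStream;

    // Copies one store file into a zip archive; assumes the store is not
    // being modified while this runs (hence the exclusive lock).
    static void backupToZip(File storeFile, File zipFile) throws IOException {
        try (ZipOutputStream zip = new ZipOutputStream(new FileOutputStream(zipFile));
             InputStream in = new FileInputStream(storeFile)) {
            zip.putNextEntry(new ZipEntry(storeFile.getName()));
            byte[] buf = new byte[64 * 1024];
            for (int n; (n = in.read(buf)) != -1; ) {
                zip.write(buf, 0, n);
            }
            zip.closeEntry();
        }
    }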

Massive performance slowdown migrating from JDBM3

Any chance of making async writes optional? We're trying MapDB as a replacement for JDBM3, and we're seeing massive performance hits because we use many, many (10-100k) distinct DB instances on a per-thread basis, with the startup of writerThread/preallocThread consuming something like 57% of process time. On our load test we go from 18 seconds using JDBM3 to almost 14 minutes, with 25% of CPU time in Thread.start() and another 20% in LockSupport.parkNanos (apparently used within writerThread.run()).

Map entry expiration after timeout.

MapDB will be used in the future as a caching solution. For this case we should be able to automatically remove keys from a Map if they have not been used for some period of time. A hypothetical API sketch is below.
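A hypothetical sketch of what the API could look like; expireAfterAccess() and the builder style are illustrative names for the proposed feature, not existing MapDB API:

    // hypothetical builder call for the proposed expiration feature
    HTreeMap<String, byte[]> cache = db.createHashMap("cache")
            .expireAfterAccess(30, TimeUnit.MINUTES)   // drop entries unused for 30 minutes
            .make();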

Found setting where I lose part of a List value 'silently'

I store LinkedList values with String keys, and sadly, more or less by accident, I noticed that jdbm/mapdb loses a single entry from a LinkedList that I stored. I tried to reproduce this in a small snippet, which was not so easy because most of the time everything worked well, but now I have a stable setup (see below). The snippet runs on Linux 64-bit, JDK 1.7.0_09-b05. I started with jdbm3-alpha4, tried alpha5 and 3-SNAPSHOT, and now I use the MapDB-0.9-SNAPSHOT Maven snapshot. No difference, besides a new exception with MapDB; see the comments in the snippet for that as well.
I guess it could be something with the serialization, because the behaviour is only seen after writing a certain number of entries. Here is the snippet:

    // without appendOnlyEnable()
    DB jdbmDB = DBMaker.newTempFileDB().asyncWriteDisable().closeOnJvmShutdown().deleteFilesAfterClose().journalDisable().make();

    // with appendOnlyEnable()
    // DB jdbmDB = DBMaker.newTempFileDB().asyncWriteDisable().closeOnJvmShutdown().deleteFilesAfterClose().journalDisable().appendOnlyEnable().make();
    // with TreeMap, the second list value is lost
    Map<String, LinkedList<String>> map = jdbmDB.getTreeMap("testMap");
    // with HashMap, all List values are lost
    // Map<String, LinkedList<String>> map = jdbmDB.getHashMap("testMap");

    // with asyncWrite - ConcurrentModificationException at SerializerBase.serializeCollection(SerializerBase.java:498)
    // Map<String, LinkedList<String>> map = DBMaker.newTempTreeMap();

    int iLoops = 2000000;
    for (int i = 0; i < iLoops; i++)
    {
        String strRandomKey = String.valueOf(Math.random());
        LinkedList<String> llVals4RandomKey = map.get(strRandomKey);
        if(llVals4RandomKey == null)
        {
            llVals4RandomKey = new LinkedList<String>();
            map.put(strRandomKey, llVals4RandomKey);
        }
        llVals4RandomKey.add(UUID.randomUUID().toString());


        if(i == (int) (iLoops * 0.9))
        {
            System.out.println("insert first");
            LinkedList<String> llFirstValue = new LinkedList<String>();
            map.put("ourKey", llFirstValue);
            llFirstValue.add("firstValue");
        }
        if(i == (int) (iLoops * 0.97))
        {
            System.out.println("insert second");
            LinkedList<String> llValues = map.get("ourKey");
            llValues.add("secondValue");
        }

        if(i % 100000 == 0) System.out.println(i);
    }


    System.out.println(map.get("ourKey"));
    if(map.get("ourKey").size() < 2)
        System.err.println("some value is lost :(");
    else
        System.out.println("everything seems to be fine...");

    jdbmDB.close();

NotSerializableException with more than x elements

Hi, I think there is a problem: when I put only a few elements the map works fine, but when this number grows I get a NotSerializableException on the same object.

Example

    DB db = DBMaker.newTempFileDB().make();
    ConcurrentNavigableMap<Integer, AMatrix> map = db.getTreeMap("map");
    int[][] matrix = new int[2][2];
    for (int i = 0; i < 1000; i++) {
        System.out.println(i);
        map.put(1, new AMatrix(matrix));
    }

Edit: my mistake, it works fine!
P.S. It is much faster with a wrapper class.

Problem with DB.createHashSet(String name, Serializer<K> serializer)

In createHashSet with a custom serializer argument, there seems to be a problem.

The createHashSet method should create the HTreeMap backing the HashSet with the hasValues parameter set to false, as the getHashSet(String name) method does, but in fact it sets the value to true. Creating the HashSet works, but adding objects to it fails with

java.lang.UnsupportedOperationException
at org.mapdb.HTreeMap$3.add(HTreeMap.java:712)

because the hasValues attribute is set to true during creation (and is used to check whether add is a supported operation).

Add Cache hit/miss statistics

MapDB should provide hit and miss statistics for instance caches. This is important for performance tuning and for choosing the right cache settings. A sketch of such counters is below.
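An illustrative sketch of the counters involved, wrapped around an instance-cache lookup (the items table and the method names are made up; this is not MapDB's cache code):

    import java.util.concurrent.atomic.AtomicLong;

    // hit/miss counters around a cache lookup; `items` stands in for the
    // underlying cache table
    private final AtomicLong hits = new AtomicLong(), misses = new AtomicLong();

    Object cacheGet(long recid) {
        Object item = items.get(recid);
        if (item != null) hits.incrementAndGet();
        else misses.incrementAndGet();
        return item;
    }

    double hitRatio() {
        long h = hits.get(), m = misses.get();
        return h + m == 0 ? 0.0 : (double) h / (h + m);
    }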

Secondary_Key example fails when NavigableSet is replaced with MapDB implementation

Hello,

I'm trying to use Secondary_Key.java example to store both collections (main and secondary indexes) off-heap.

// replaced from     valueHash = new TreeSet<...>(); 
        NavigableSet<Fun.Tuple2<Integer,Long>> valueHash =
                DBMaker.newTempTreeSet();

The following exception is thrown:

Exception in thread "main" java.lang.NullPointerException
at org.mapdb.BTreeMap.findLarger(BTreeMap.java:1206)
at org.mapdb.BTreeMap$SubMap.firstEntry(BTreeMap.java:1765)
at org.mapdb.BTreeMap$SubMap$Iter.<init>(BTreeMap.java:1951)
at org.mapdb.BTreeMap$SubMap$1.<init>(BTreeMap.java:1976)
at org.mapdb.BTreeMap$SubMap.keyIterator(BTreeMap.java:1976)
at org.mapdb.BTreeMap$KeySet.iterator(BTreeMap.java:1412)
at org.mapdb.Bind$1.iterator(Bind.java:23)
at examples.Secondary_Key.main(Secondary_Key.java:43)
...

Android support mentioned but nonexistent

"works equally well on an Android phone and a supercomputer with multi-terabyte storage." but in reality there is no way to run MapDB on Dalvik. Multiple usages of sun-specific APIs makes MapDB incompatible with Android.

ConcurrentModificationException

Hello,

For a few days now I have been getting a ConcurrentModificationException when I use MapDB.

The configuration is the following:

DBMaker.newFileDB(new File(dbFilename))
        .cacheSoftRefEnable()
        .closeOnJvmShutdown()
        .deleteFilesAfterClose()
        .journalDisable()
        .make();

The stacktrace is the following.

Caused by: java.lang.RuntimeException: Writer Thread failed with an exception.
    at org.mapdb.AsyncWriteEngine.checkAndStartWriter(AsyncWriteEngine.java:186)
    at org.mapdb.AsyncWriteEngine.update(AsyncWriteEngine.java:220)
    at org.mapdb.EngineWrapper.update(EngineWrapper.java:52)
    at org.mapdb.SnapshotEngine.update(SnapshotEngine.java:53)
    at org.mapdb.CacheWeakSoftRef.update(CacheWeakSoftRef.java:147)
    at org.mapdb.HTreeMap.put(HTreeMap.java:484)
    at org.mapdb.HTreeMap.putIfAbsent(HTreeMap.java:1049)
    at fr.inria.eventcloud.proxies.SubscribeProxyImpl.markAsDelivered(SubscribeProxyImpl.java:816)
    at fr.inria.eventcloud.proxies.SubscribeProxyImpl.reconstructCompoundEvent(SubscribeProxyImpl.java:683)
    at fr.inria.eventcloud.proxies.SubscribeProxyImpl.receiveSbce1Or2(SubscribeProxyImpl.java:637)
    at sun.reflect.GeneratedMethodAccessor61.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.objectweb.proactive.core.mop.MethodCall.execute(MethodCall.java:411)
    at org.objectweb.proactive.core.component.request.ComponentRequestImpl.serveInternal(ComponentRequestImpl.java:186)
    ... 8 more
Caused by: java.util.ConcurrentModificationException
    at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:819)
    at java.util.ArrayList$Itr.next(ArrayList.java:791)
    at org.mapdb.SerializerPojo$1.serialize(SerializerPojo.java:41)
    at org.mapdb.SerializerPojo$1.serialize(SerializerPojo.java:36)
    at org.mapdb.StorageDirect.update(StorageDirect.java:136)
    at org.mapdb.AsyncWriteEngine$1.run(AsyncWriteEngine.java:103)

LongHashMap not working

import org.mapdb.LongHashMap;
public class LongHashMapTest {
    public static void main(String[] args) {
        LongHashMap<String> t = new LongHashMap<String>();
        t.put(6447459, "aa");
        t.put(6382177, "bb");
        System.out.println(t.get(6447459)); // aa
        System.out.println(t); // LongHashMap[6382177 => bb, 6382177 => bb]
    }
}

The output should be something like:
LongHashMap[6447459 => aa, 6382177 => bb]
Iterator also does not work.
Smaller values work.
(java version: win32 1.7.0_09-b05 and linux x64)

InternalError with BTreeMap.subMap().entrySet()

I'm getting an InternalError when trying to use the subMap method of BTreeMap:

java.lang.InternalError
    at org.mapdb.BTreeMap.makeEntry(BTreeMap.java:918)
    at org.mapdb.BTreeMap.findLarger(BTreeMap.java:1207)
    at org.mapdb.BTreeMap$SubMap.firstEntry(BTreeMap.java:1762)
    at org.mapdb.BTreeMap$SubMap$Iter.<init>(BTreeMap.java:1948)
    at org.mapdb.BTreeMap$SubMap$3.<init>(BTreeMap.java:1994)
    at org.mapdb.BTreeMap$SubMap.entryIterator(BTreeMap.java:1994)
    at org.mapdb.BTreeMap$EntrySet.iterator(BTreeMap.java:1521)

Also, toString on a subMap seems to emit all key/value pairs.

Add Concurrent Deque

MapDB needs a concurrent deque to be usable for messaging frameworks. We will probably implement more than one variant to support various scenarios (deque, queue, bounded, unbounded...).

The implementation should be a simple rewrite of the java.util.concurrent classes to use MapDB storage. For this, MapDB must support a compare-and-swap operation at the storage level; a sketch is below.
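As a sketch of such a rewrite, a simplified lock-free push (stack-style, the simplest case) built on the storage-level CAS proposed in the issue above; the Engine calls, Node class and serializers here are illustrative:

    // each node record stores (value, recid of next node); the head pointer
    // lives in its own record and is advanced with compare-and-swap
    void push(String value) {
        while (true) {
            Long oldHead = engine.get(headRecid, Serializer.LONG);
            long node = engine.put(new Node(value, oldHead), nodeSerializer);
            if (engine.compareAndSwap(headRecid, oldHead, node, Serializer.LONG)) {
                return;                            // head now points at the new node
            }
            engine.delete(node, nodeSerializer);   // lost the race: discard node and retry
        }
    }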

Add Storage Statistics

JDBM3 provided detailed statistics about storage fragmentation, free space and average record size. These stats are very useful in the long run for DB maintenance.

NullPointerException in CacheLRU

I'm having some trouble storing a large collection of data using MapDB-0.9.
Here's the context:

  • Key: Integer, Value: String[];
  • Before inserting a pair <key, value> I call the get in order to join the new value with the one that was already stored;
  • I'm using a concurrency library (conja). The two threads I'm using could access the same map. However, I'm using the synchronized keyword over the map object, enclosing the whole insertion process - both the get and the put;
  • Using HTreeMap;
  • This is how I'm creating the DB:

    db = DBMaker
            .newFileDB(new File(path))
            .cacheLRUEnable()
            .journalDisable()
            .closeOnJvmShutdown()
            .make();

This is the stack trace I get:

java.lang.NullPointerException
    at org.mapdb.CacheLRU.get(CacheLRU.java:41)
    at org.mapdb.HTreeMap.get(HTreeMap.java:364)
    at reaction.storage.PersistentStorage.get(PersistentStorage.java:41)
    at reaction.storage.StorageLSH.putObject(StorageLSH.java:105)
    at reaction.MainStoreInfo$1.apply(Unknown Source)
    at reaction.MainStoreInfo$1.apply(Unknown Source)
    at com.davidsoergel.conja.Parallel$1.performAction(Parallel.java:29)
    at com.davidsoergel.conja.Parallel$ForEach$1.run(Parallel.java:155)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at com.davidsoergel.conja.ComparableFutureTask.run(ComparableFutureTask.java:113)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:680)

Thank you in advance for your attention.
Rui Silva

NullPointerException with CacheWeakSoftRef#close

I get a NullPointerException from the close method on a MapDB instance initialized with cacheSoftRefEnable, closeOnJvmShutdown, deleteFilesAfterClose and journalDisable.

So far I haven't succeeded in reproducing the issue in a simple test case. However, the stacktrace is the following:

Exception in thread "Thread-120" java.lang.NullPointerException
  at org.mapdb.CacheWeakSoftRef.close(CacheWeakSoftRef.java:200)
  at org.mapdb.DB.close(DB.java:298)
  at ...
  at java.lang.Thread.run(Thread.java:722)

The issue seems to be due to the queueThread being terminated before the call to CacheWeakSoftRef#close is executed.

NPE when running MassiveInsert test

I modified the MassiveInsert test to insert a max of 500M records, but along the way a NullPointerException was thrown:

10000000 - 9 - 1088376 rec/sec
100000000 - 100 - 997396 rec/sec
Exception in thread "pool-1-thread-3" java.lang.NullPointerException
at org.mapdb.BTreeMap.put2(BTreeMap.java:540)
at org.mapdb.BTreeMap.put(BTreeMap.java:495)
at benchmark.MassiveInsert$1.run(MassiveInsert.java:54)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)

This is the line of code which throws the exception:

Object oldVal = (hasValues ? A.vals()[pos] : Utils.EMPTY_STRING);

When I ran the test again, it finished without any issues. Looks like a concurrency race condition.

10000000 - 9 - 1022390 rec/sec
100000000 - 90 - 1100097 rec/sec
DONE
500000000 - 3113 - 160581 rec/sec

DBMaker and DB helper classes

I am proposing that we start writing some tasks for this new version of JDBM.

I haven't looked at the code yet, but I propose starting by rewriting the helper classes as they were in JDBM3.

We can discuss any improvements we see that could be made.

Hash collision protection

Most libraries (including j.u.HashMap) are vulnerable to hash collision attacks. MapDB could be indirectly exposed to the internet, so it should be immune to this attack. The solution is to add a randomly generated hash salt to each store and use the salt in all hash operations, as sketched below.
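A minimal sketch of the salted hash, assuming a per-store salt generated once at store creation (the field and method names are illustrative):

    // per-store random salt mixed into every key hash, so an attacker
    // cannot precompute colliding keys for a given store
    private final int salt = new java.security.SecureRandom().nextInt();

    int hash(Object key) {
        int h = key.hashCode() ^ salt;
        h ^= (h >>> 16);          // extra mixing so the salt affects all bits
        return h;
    }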

Fix Javadoc

The Javadoc is currently a mess. It needs some serious polishing and updating.

Delta compression serializers for BTree Keys

BTree keys are stored ordered within nodes. It is possible to save a lot of space if only the difference between consecutive keys is stored.

JDBM3 applied the compression automatically, but this required complex code and could not be modified by the user. Also, the checks done for automatic delta compression were slowing down BTree inserts/updates.

In MapDB, delta compression is done by supplying a custom BTree key serializer. We need such serializers for strings, byte[], big numbers and other types; an illustrative sketch for strings is below.
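For example, a common-prefix encoding for sorted String keys. This is a standalone sketch of the technique, not the actual MapDB key serializer interface:

    import java.io.*;

    // Prefix (delta) compression for sorted String keys: each key stores how
    // many characters it shares with its predecessor plus only the new suffix.
    static void writeKeys(DataOutput out, String[] sortedKeys) throws IOException {
        String prev = "";
        for (String key : sortedKeys) {
            int common = 0;
            int max = Math.min(prev.length(), key.length());
            while (common < max && prev.charAt(common) == key.charAt(common)) common++;
            out.writeInt(common);                     // shared prefix length
            out.writeUTF(key.substring(common));      // only the differing suffix
            prev = key;
        }
    }

    static String[] readKeys(DataInput in, int count) throws IOException {
        String[] keys = new String[count];
        String prev = "";
        for (int i = 0; i < count; i++) {
            int common = in.readInt();
            prev = prev.substring(0, common) + in.readUTF();
            keys[i] = prev;
        }
        return keys;
    }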

This is an 'enhancement', but it should be treated as a 'bug' since it has a serious impact on performance tests.

Thread deadlocking

I understand JDBM4 is very early in the development process, but I was testing it out to see whether just my one use-case (I need an arbitrarily large sorted map) would be served well.

My problem is that adding records locks up some inner thread (the write thread?) after a certain point.

I created this unit test to see if JDBM4 could handle millions of records, and I played with several of the DBMaker options but cannot get it to complete without locking up.

    private static final int COUNT = 10_000_000;

    @Test
    public void testJDBM() {
        DB db = DBMaker.newTempFileDB().make();
        SortedMap<Integer, Double> map = db.getTreeMap("treemap");

        for (int i = 0; i < COUNT; i++) {
            if (i % 100_000 == 0) {
                System.out.println("commit " + i);
                db.commit();
            }
            map.put(i, (double) i);
        }
        for (int i = 0; i < COUNT; i++) {
            if (i % 100_000 == 0) {
                System.out.println("read " + i);
                db.commit();
            }
            Assert.assertEquals(map.get(i), (double) i);
        }
    }

Collection binding

JDBM2 had secondary keys and secondary collections. MapDB should have something similar, called 'collection binding'. Basically it is a map modification listener that keeps a secondary collection in sync; a sketch follows below.
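A sketch of the idea with a hypothetical listener interface (MapDB's Bind utility works along these lines, but every name below is illustrative):

    import java.util.AbstractMap.SimpleEntry;
    import java.util.Map;
    import java.util.Set;

    // hypothetical modification-listener interface; newValue == null means removal
    interface MapListener<K, V> {
        void update(K key, V oldValue, V newValue);
    }

    // hypothetical map that notifies listeners on every modification
    interface ObservableMap<K, V> {
        void addListener(MapListener<K, V> listener);
    }

    // keeps a secondary index of (value, key) pairs in sync with the primary map
    static <K, V> void bindSecondary(final Set<Map.Entry<V, K>> secondary,
                                     ObservableMap<K, V> primary) {
        primary.addListener(new MapListener<K, V>() {
            @Override public void update(K key, V oldVal, V newVal) {
                if (oldVal != null) secondary.remove(new SimpleEntry<V, K>(oldVal, key));
                if (newVal != null) secondary.add(new SimpleEntry<V, K>(newVal, key));
            }
        });
    }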
