
# Introduction

NOTE

This repo is old and out of date. I suggest you use version 3.x from https://github.com/OpenHFT/Chronicle-Queue

# Chronicle

This library is an ultra-low-latency, high-throughput, persisted, messaging and event-driven in-memory database. Typical latency is as low as 80 nanoseconds, and it supports throughputs of 5-20 million messages/record updates per second.

This library also supports distributed, durable, observable collections (Map, List, Set). Performance depends on the data structures used, but simple data structures can achieve throughputs of 5 million elements or key/value pairs per second in batches (e.g. addAll or putAll) and 500K elements or key/values per second when added/updated/removed individually.
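The batch advantage is the usual one: a single putAll crosses the persistence layer once rather than once per entry. A generic illustration (java.util API only; sharedMap is a placeholder for a Chronicle-backed map, whose concrete class is not named in this README):

// Generic illustration; sharedMap is a placeholder for a Chronicle-backed map.
Map<String, Long> batch = new HashMap<String, Long>();
for (int i = 0; i < 1000; i++)
    batch.put("key" + i, (long) i);
sharedMap.putAll(batch); // one batched operation: ~5 M pairs/sec claimed above
sharedMap.put("k", 1L);  // individual operation: ~500K ops/sec claimed above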

It uses almost no heap, has trivial GC impact, can be much larger than your physical memory size (limited only by the size of your disk) and can be shared between processes with less than 1/10th the latency of using sockets over loopback. It can change the way you design your system, because it allows you to have independent processes which need not be running at the same time (no messages are lost). This is useful for restarting services and for testing your services from canned data; in effect it gives you sub-microsecond durable messaging. You can attach any number of readers, including tools to inspect the exact state of the data externally, e.g. od -t cx1 {file} to see the current state.
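For instance, a separate reader process can attach to the same files and tail them independently of the writer. A minimal sketch using the same API as the example below (the path and the running flag are placeholders):

// Minimal reader sketch using the API from the example below.
IndexedChronicle chronicle = new IndexedChronicle("/tmp/test"); // placeholder path
Excerpt excerpt = chronicle.createExcerpt();
while (running) {                        // 'running' is a placeholder flag
    if (!excerpt.nextIndex()) continue;  // busy-wait until the writer adds an entry
    long timestamp = excerpt.readLong(); // read fields in the order they were written
    excerpt.finish();
}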

# Example

// Imports needed by the example (package names as used elsewhere in this README):
import com.higherfrequencytrading.chronicle.Excerpt;
import com.higherfrequencytrading.chronicle.impl.IndexedChronicle;
import com.higherfrequencytrading.chronicle.tools.ChronicleTools;

import java.io.File;
import java.io.IOException;
import java.util.Arrays;

public static void main(String... ignored) throws IOException {
    final String basePath = System.getProperty("java.io.tmpdir") + File.separator + "test";
    ChronicleTools.deleteOnExit(basePath);
    final int[] consolidates = new int[]{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16};
    final int warmup = 500000;
    final int repeats = 20000000;
    //Write
    Thread t = new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                final IndexedChronicle chronicle = new IndexedChronicle(basePath);
                chronicle.useUnsafe(true); // for benchmarks.
                final Excerpt excerpt = chronicle.createExcerpt();
                for (int i = -warmup; i < repeats; i++) {
                    doSomeThinking();
                    // reserve an upper bound: an 8-byte timestamp, a count, and the stop-bit ints
                    excerpt.startExcerpt(8 + 4 + 4 * consolidates.length);
                    excerpt.writeLong(System.nanoTime());
                    excerpt.writeUnsignedShort(consolidates.length);
                    for (final int consolidate : consolidates) {
                        excerpt.writeStopBit(consolidate);
                    }
                    excerpt.finish();
                }
                chronicle.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        private void doSomeThinking() {
            // real programs do some work between messages
            // this has an impact on the worst case latencies.
            Thread.yield();
        }
    });
    t.start();
    //Read
    final IndexedChronicle chronicle = new IndexedChronicle(basePath);
    chronicle.useUnsafe(true); // for benchmarks.
    final Excerpt excerpt = chronicle.createExcerpt();
    int[] times = new int[repeats];
    for (int count = -warmup; count < repeats; count++) {
        while (!excerpt.nextIndex()) {
            /* busy wait */
        }
        final long timestamp = excerpt.readLong();
        long time = System.nanoTime() - timestamp;
        if (count >= 0)
            times[count] = (int) time;
        final int nbConsolidates = excerpt.readUnsignedShort();
        assert nbConsolidates == consolidates.length;
        for (int i = 0; i < nbConsolidates; i++) {
            excerpt.readStopBit();
        }
        excerpt.finish();
    }
    Arrays.sort(times);
    for (double perc : new double[]{50, 90, 99, 99.9, 99.99}) {
        System.out.printf("%s%% took %.1f µs, ", perc, times[((int) (repeats * perc / 100))] / 1000.0);
    }
    System.out.printf("worst took %d µs%n", times[times.length - 1] / 1000);
    chronicle.close();
}

This prints output like the following (note: this test writes and reads 20 million messages in a matter of seconds; the first half a million iterations are warm-up):

50.0% took 0.3 µs, 90.0% took 0.4 µs, 99.0% took 33.5 µs, 99.9% took 66.9 µs, 99.99% took 119.7 µs, worst took 183 µs
50.0% took 0.4 µs, 90.0% took 0.5 µs, 99.0% took 0.6 µs, 99.9% took 9.3 µs, 99.99% took 60.1 µs, worst took 883 µs
50.0% took 0.3 µs, 90.0% took 0.4 µs, 99.0% took 0.6 µs, 99.9% took 21.9 µs, 99.99% took 62.0 µs, worst took 234 µs
50.0% took 0.3 µs, 90.0% took 0.4 µs, 99.0% took 0.6 µs, 99.9% took 9.3 µs, 99.99% took 55.8 µs, worst took 199 µs

# Support

Group: https://groups.google.com/forum/?fromgroups#!forum/java-chronicle

# Software used to develop this package

YourKit 11.x - http://www.yourkit.com - If you don't profile the performance of your application, you are just guessing where the performance bottlenecks are.

IntelliJ CE - http://www.jetbrains.com - My favourite IDE.

# Version History

Version 1.8 - Add MutableDecimal and FIX support.

Version 1.7.1 - Bug fixes and OSGi support. Sonar and IntelliJ code analysis - thank you, Mani. Added appendDate and appendDateTime. Improved performance for appendLong and appendDouble (thank you, Andrew Bissell).

Version 1.7 - Added support to the DataModel for arbitrary events to be sent, such as timestamps, heartbeats and changes in stage, which can be picked up by listeners. Added support in the DataModel for arbitrary annotations on the data, so each map/collection can have additional configuration. Added ConfigProperties, which is scoped properties, i.e. a single Properties file with rule-based properties.

Version 1.6 - Distributed, durable, observable collections: List, Set and Map. Efficient serialization of Java objects. Java Serialization is provided for compatibility, but use of Externalizable is preferred. Minimised virtual memory use for 32-bit platforms.

Version 1.5 - Publishing a Chronicle over TCP. Note: the package has changed to com.higherfrequencytrading to support publishing to Maven Central.

Version 1.4 - Reading/writing enumerated types, Enum and generic. Improved replication performance and memory usage (especially for large excerpts). Removed the requirement to provide a DataSizeBitsHint, since it wasn't clear what it was for. Added ChronicleTest to support testing. Added support for parsing text to complement the existing appending of text.

Version 1.3 - Minor improvements.

Version 1.2 - Fixed a bug in the handling of writeBoolean.

Version 1.1 - Added support for OutputStream and InputStream as required by ObjectOutputStream and ObjectInputStream. Using Java Serialization is not recommended as it's relatively slow, but sometimes it's your only option. ;)

Version 1.0 - First formal release available in https://github.com/peter-lawrey/Java-Chronicle/tree/master/repository

Version 0.5.1 - Fix code to compile with Java 6 update 31. (Previously only Java 7 was used)

Version 0.5 - Added support for replication of a Chronicle over TCP to any number of listeners, either as a component or as a stand-alone/independent application; uses ChronicleSource and ChronicleSink. Added ChronicleReader to read records as text as they are added (like less).
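A rough idea of the wiring (a sketch only: the ChronicleSource and ChronicleSink class names come from the note above, but the constructor signatures here are my assumption, not taken from the source):

// Sketch only: these constructor signatures are assumptions, not confirmed here.
IndexedChronicle master = new IndexedChronicle(basePath);          // publisher side
ChronicleSource source = new ChronicleSource(master, 9876);        // serve over TCP
// ... in another process or on another box:
IndexedChronicle copy = new IndexedChronicle(basePath + "-copy");  // subscriber side
ChronicleSink sink = new ChronicleSink(copy, "server-host", 9876); // replicate from host:port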

Version 0.4 - Added support for writing text to the log file without creating garbage, via the com.higherfrequencytrading.chronicle.ByteStringAppender interface. Useful for text logs.
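For example, appending text through the excerpt avoids String/StringBuilder garbage. A sketch (the exact append overloads are assumed from the interface's purpose, not confirmed here):

// Sketch: garbage-free text logging; the append overloads shown are assumptions.
excerpt.startExcerpt(64);
excerpt.append("mid=").append(129.75).append('\n'); // no temporary Strings created
excerpt.finish();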

Version 0.3.1 - Add support for 24-bit int and 48-bit long values.

Version 0.3 - Added support for unsigned byte, short and int. Added support for compacted short, unsigned short, int, unsigned int, long and double types. (These types use half the size for small values, otherwise 50% more.)

Version 0.2 - Added support for a 32-bit unsigned index: IntIndexedChronicle. This is slightly slower on a 64-bit JVM, but more compact. Useful if you don't need more than 4 GB of data.

Version 0.1 - Can read/write all basic data types. 26 M/second (max) multi-threaded.

It uses a memory-mapped file to store "excerpts" of a "chronicle". Initially it only supports an indexed array of data.
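For background, the mechanism is the JDK's standard memory-mapped file. A stripped-down illustration of the idea (plain JDK, not Chronicle's actual code):

// Plain-JDK illustration of the underlying mechanism, not Chronicle's code:
// map a region of a file into memory and write to it directly; the OS
// persists the pages, and other processes can map the same file.
try (RandomAccessFile raf = new RandomAccessFile("/tmp/demo.data", "rw")) {
    MappedByteBuffer mbb = raf.getChannel()
            .map(FileChannel.MapMode.READ_WRITE, 0, 1 << 20); // 1 MB window
    mbb.putLong(System.nanoTime()); // visible to other mappers of this file
}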

# Performance

### Throughput Test - FileLoggingMain

https://github.com/peter-lawrey/Java-Chronicle/blob/master/testing/src/main/java/com/higherfrequencytrading/chronicle/impl/FileLoggingMain.java

This test logs one million lines of text using Chronicle compared with Logger.

To log 1,000,000 messages took 0.234 seconds using Chronicle and 7.347 seconds using Logger.

### Latency Test - IndexedChronicleLatencyMain

Note: These timings include serialization. This is important because many performance tests omit serialization, even though it can be many times slower than the data store being tested.

https://github.com/peter-lawrey/Java-Chronicle/blob/master/testing/src/main/java/com/higherfrequencytrading/chronicle/impl/IndexedChronicleLatencyMain.java

On a 4.6 GHz i7-2600, 16 GB of memory, fast SSD drive, CentOS 5.7:

The average RTT latency was 175 ns. The 50/99 / 99.9/99.99%tile latencies were 160/190 / 2,870/3,610 - ByteBuffer (tmpfs)

The average RTT latency was 172 ns. The 50/99 / 99.9/99.99%tile latencies were 160/190 / 2,780/3,520 - Using Unsafe (tmpfs)

The average RTT latency was 180 ns. The 50/99 / 99.9/99.99%tile latencies were 160/190 / 3,110/19,110 - ByteBuffer (ext4)

The average RTT latency was 178 ns. The 50/99 / 99.9/99.99%tile latencies were 160/190 / 3,100/19,090 - Using Unsafe (ext4)

### Throughput Test - IndexedChronicleThroughputMain

https://github.com/peter-lawrey/Java-Chronicle/blob/master/testing/src/main/java/com/higherfrequencytrading/chronicle/impl/IndexedChronicleThroughputMain.java

On a 4.6 GHz i7-2600, 16 GB of memory, fast SSD drive, CentOS 5.7:

Took 12.416 seconds to write/read 200,000,000 entries, rate was 16.1 M entries/sec - ByteBuffer (tmpfs)

Took 9.185 seconds to write/read 200,000,000 entries, rate was 21.8 M entries/sec - Using Unsafe (tmpfs)

Took 25.693 seconds to write/read 400,000,000 entries, rate was 15.6 M entries/sec - ByteBuffer (ext4)

Took 19.522 seconds to write/read 400,000,000 entries, rate was 20.5 M entries/sec - Using Unsafe (ext4)

Took 71.458 seconds to write/read 1,000,000,000 entries, rate was 14.0 M entries/sec - Using Unsafe (ext4)

Took 141.424 seconds to write/read 2,000,000,000 entries, rate was 14.1 M entries/sec - Using Unsafe (ext4)

Note: in the last test, Java is using 112 GB of dense virtual memory without showing a dramatic slowdown or performance hit.

The 14.1 M entries/sec is close to the maximum write speed of the SSD: each entry averages 28 bytes (including the index), and 14.1 M entries/sec × 28 bytes ≈ 400 MB/s.

### More compact index for less than 4 GB of data

https://github.com/peter-lawrey/Java-Chronicle/blob/master/testing/src/main/java/com/higherfrequencytrading/chronicle/impl/IntIndexedChronicleThroughputMain.java

On a 4.6 GHz, i7-2600:

Took 6.325 seconds to write/read 100,000,000 entries, rate was 15.8 M entries/sec - ByteBuffer (tmpfs)

Took 4.590 seconds to write/read 100,000,000 entries, rate was 21.8 M entries/sec - Using Unsafe (tmpfs)

Took 7.352 seconds to write/read 100,000,000 entries, rate was 13.6 M entries/sec - ByteBuffer (ext4)

Took 5.283 seconds to write/read 100,000,000 entries, rate was 18.9 M entries/sec - Using Unsafe (ext4)


# Contributors

abissell, cobr123, craigday, ctasada, falconair, jerrinot, jijunwang, jkubrynski, kenota, mingfang, neomatrix369, peter-lawrey


# Issues

## IndexedChronicle doesn't close data channel

Hi Peter, your library is great! I have only found one small bug so far, in the clearAll() method of the IndexedChronicle class: it should close the channel parameter, but due to a typo it always closes indexChannel instead.

## Read/write in chronicle queue using a pool

Hi Peter. I am a new user of Chronicle Queue and I need a zero-allocation strategy to read and write objects from the queue. I want to use a queue with a Marshallable-style implementation over bytes, like a POJO class. Is that a correct strategy?

## AbstractExcerpt.skipBytes() doesn't skip

On a read-only excerpt with 0 capacity, skipBytes() ignores its input and doesn't skip at all. This can be fixed by removing the defensive code:

@Override
public int skipBytes(int n) {
    int position = position();
    // int n2 = Math.min(n, capacity - position); // the defensive line to remove
    position(position + n);
    return n;
}

## Incorrect pointer math in AbstractExcerpt.position()

I noticed that skipBytes() doesn't work correctly; it turned out to be a problem with position(). This code works:

@Override
public Excerpt position(int position) {
    if (position < 0 || position >= capacity()) throw new IndexOutOfBoundsException();
    this.position = start + position; // start has to be added
    return this;
}

## Documentation?

Peter,

I am a software architect, currently trying to get a few developers who are rather unfamiliar with the whole concept of low latency enthusiastic about it. Alas, at the moment there is only the code: no documentation at all, besides what one can pull as Javadoc from the code.

Is there any material you have (slide show, PDF, short explanation) that I could use to teach my colleagues?

Best Regards

## readUnsignedShort returns an unsigned byte

In the readUnsignedShort methods, we should use "readShort() & 0xFFFF" rather than "readShort() & 0xFF". I guess it's a typo. And it sounds like

@Override
public void writeUnsignedShort(int offset, int v) {
    writeUnsignedShort(offset, v); // recurses into itself
}

should be
@Override
public void writeUnsignedShort(int offset, int v) {
    writeShort(offset, v);
}
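A quick way to see why the wider mask matters (plain Java, for illustration only):

// Plain-Java illustration: recovering an unsigned 16-bit value from a short.
short s = (short) 0xFEDC;  // the value as stored
int wrong = s & 0xFF;      // 0xDC   = 220   - keeps only the low byte
int right = s & 0xFFFF;    // 0xFEDC = 65244 - the full unsigned short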

BTW, thanks for this great library.

## Bug and/or unexpected behaviour of hasNextIndex()

I started playing with the Chronicle library, trying to start with basic stuff like walking through a saved Chronicle. I encountered some problems with the Excerpt.hasNextIndex() and Excerpt.nextIndex() methods. I believe there are two cases: one is a bug, and the other is either a bug or me not understanding what hasNextIndex() should do.

  1. The first is, I think, a bug. If you open an IndexedChronicle and create an excerpt on it, hasNextIndex() will return false, although nextIndex() will return true. If you change the order, both will return true (since the index was moved to 0). See the test methods testHasNextIndexFail and testHasNextIndexPass in the attached unit test.
  2. The second problem comes when I try to iterate over a Chronicle. Being used to the Iterator semantics of hasNext() and next(), I would expect nextIndex() to return true whenever hasNextIndex() returns true. But it seems that this is not the case with Excerpt. See the testHasNextIndexIteration() method in the unit test.

Unit test to reproduce bug and show chronicle iteration:

package com.binarybuffer.chronicle.test;

import com.higherfrequencytrading.chronicle.Chronicle;
import com.higherfrequencytrading.chronicle.Excerpt;
import com.higherfrequencytrading.chronicle.impl.IndexedChronicle;
import com.higherfrequencytrading.chronicle.tools.ChronicleTools;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

import java.io.File;
import java.io.IOException;
import static org.junit.Assert.*;
/**
 * Test correct behaviour of hasNextIndex() and nextIndex() methods
 */
public class ExcerptHasNextTest {
    static final String TMP = System.getProperty("java.io.tmpdir");
    public static final int NUMBER_OF_ENTRIES = 12;


    @Test
    public void testHasNextIndexFail() throws IOException {
        final Chronicle chr = createChronicle("hasNextFail");

        final Excerpt readExcerpt = chr.createExcerpt();
        assertTrue("Read excerpt should have next index", readExcerpt.hasNextIndex());
        assertTrue("It should be possible to move to next index", readExcerpt.nextIndex());
    }

    @Test
    public void testHasNextIndexPass() throws IOException {
        final Chronicle chr = createChronicle("hasNextPass");

        final Excerpt readExcerpt = chr.createExcerpt();
        assertTrue("It should be possible to move to next index", readExcerpt.nextIndex());
        assertTrue("Read excerpt should have next index", readExcerpt.hasNextIndex());
    }

    @Test
    public void testHasNextIndexIteration() throws IOException {
        final Chronicle chr = createChronicle("testIteration");

        final Excerpt readExcerpt = chr.createExcerpt();
        readExcerpt.index(0);

        while (readExcerpt.hasNextIndex()) {
            assertTrue("I would expect nextIndex() return true after hasNextIndex() returns true",
                    readExcerpt.nextIndex());
        }
    }

    private Chronicle createChronicle(String name) throws IOException {
        final String basePath = TMP + File.separator + name;
        final IndexedChronicle chr = new IndexedChronicle(basePath);

        ChronicleTools.deleteOnExit(basePath);

        final Excerpt excerpt = chr.createExcerpt();

        for (int i = 0; i < NUMBER_OF_ENTRIES; ++i) {
            excerpt.startExcerpt(128);
            excerpt.writeBytes("test");
            excerpt.finish();
        }

        assertTrue("Chronicle should hold all values", NUMBER_OF_ENTRIES == excerpt.size());

        return chr;
    }
}

## Subsequent writes append null characters to previous record

I have not had a significant amount of time to look into why this is happening, but if you run the following example I quickly put together, you should see that one of the read messages has extra bytes padding the end of it. This doesn't happen until after a subsequent write.

So, for example, if the class below tells you that the data at index 7 does not match what was expected, it only became that way after the data at index 8 was written. If you were to write only the first 8 byte arrays (instead of the 10 shown below), you would see that there are no errors. It isn't until the 9th one is written that the null bytes are appended to the 8th record.

It seems this pattern repeats every 8 records.

I'll see if I can provide more information when I get some more time.

Thanks.

package vanilla.java.chronicle.impl;


import org.junit.Test;
import vanilla.java.chronicle.Excerpt;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Random;

public class WritePlacementTest {



    private IndexedChronicle chronicle;
    private Random random;
    private ArrayList<byte[]> msgs;

    public WritePlacementTest() throws IOException {
        chronicle = new IndexedChronicle(System.getProperty("java.io.tmpdir"), 12);
        chronicle.useUnsafe(false);
        chronicle.clear();

        random = new Random();
        random.setSeed(System.currentTimeMillis());

        msgs = new ArrayList<byte[]>();
    } // end constructor


    @Test
    public void WriteReadTest() throws Exception {

        int i;

        // make random byte arrays
        for(i = 0; i < 10; i++) {
            byte[] b= new byte[500];
            random.nextBytes(b);
            msgs.add(b);
        }

        Excerpt<IndexedChronicle> excerpt = chronicle.createExcerpt();

        // write msgs
        for(i = 0; i < msgs.size(); i++) {
            excerpt.startExcerpt(msgs.get(i).length);
            excerpt.write(msgs.get(i));
            excerpt.finish();
        }


        // read back msgs
        i = 0;
        while (excerpt.index(i) && i < msgs.size()) {
            byte[] read = new byte[excerpt.remaining()];

            excerpt.readFully(read);

            if (read.length != msgs.get(i).length) {
                System.out.printf("The lengths do not match! Index = %d\n", i);
                System.out.printf("\t[EXPECTED] %s\n", byteArrayToHexString(msgs.get(i)));
                System.out.printf("\t[READ]     %s\n", byteArrayToHexString(read));
            }

            i++;
        }
    }


    public static String byteArrayToHexString(byte[] b) {
        StringBuffer sb = new StringBuffer(b.length * 2);
        for (int i = 0; i < b.length; i++) {
            int v = b[i] & 0xff;
            if (v < 16) {
                sb.append('0');
            }
            sb.append(Integer.toHexString(v));
        }
        return sb.toString().toUpperCase();
    }

} // end class WritePlacementTest

## Random error in InProcessChronicleTest under load

testOverTCP(com.higherfrequencytrading.chronicle.impl.InProcessChronicleTest) Time elapsed: 0.07 sec <<< ERROR!
java.nio.BufferUnderflowException
at java.nio.Buffer.nextGetIndex(Buffer.java:478)
at java.nio.DirectByteBuffer.getLong(DirectByteBuffer.java:731)
at com.higherfrequencytrading.chronicle.tcp.InProcessChronicleSink.readNextExcerpt(InProcessChronicleSink.java:171)
at com.higherfrequencytrading.chronicle.tcp.InProcessChronicleSink.readNext(InProcessChronicleSink.java:117)
at com.higherfrequencytrading.chronicle.tcp.InProcessChronicleSink$SinkExcerpt.nextIndex(InProcessChronicleSink.java:96)
at com.higherfrequencytrading.chronicle.impl.InProcessChronicleTest.testOverTCP(InProcessChronicleTest.java:78)

Fine when run by itself.

## Garbage collecting processed data

Hello,

Currently I'm using your software to push roughly 400,000 integers through 9 different Excerpt gateways, which means about 45,000 entries per excerpt. This system works as a crossover between read and write for two different applications: one pushes the data, while the other waits for data and extracts it from the excerpt. Basically, the problem is that with so much data being pushed and potentially queued, the Excerpt heap gets pretty large, and what has already been read by the reading side (data that will never be read or used again) needs to be cleared. So once data has been read from the excerpt gateway, how does it get deleted, or basically garbage collected, without clearing potentially queued data? I can't seem to find such a method.

Thank you for your time.

## Compilation warnings

Not an issue per se, but there are quite a few compilation warnings in the Chronicle code that could be sorted out.

## File handles opened in IndexedChronicle constructor cannot be closed

I am having issues with being able to delete .data/.index files, due to file handles that remain open.

I believe the problem is that an IndexedChronicle is 'opened' during initialization (i.e. in the constructor). If the constructor throws an exception, there is no way to then close the chronicle and release the resources. Specifically, I have seen this in a scenario where the thread creating the chronicle is interrupted during construction, in which case a call to FileChannel.size() throws an exception while trying to find the last record. This is after the data/index file channels have been opened, and there seems to be no way to close them.

I would propose that either...

  1. the chronicle constructor should be more careful to close all resources in the case that an exception is raised (see the sketch below this list)

  2. the chronicle should have a separate 'open()' method, in which case 'close()' could be called after a failed opening of the chronicle

** some might suggest that this could be done in a finalize() method, but that is generally not recommended. (https://www.securecoding.cert.org/confluence/display/java/MET12-J.+Do+not+use+finalizers)
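A sketch of the guard idiom that proposal 1 describes (generic Java; the field and method names here are placeholders, not Chronicle's actual code):

// Generic guard idiom for proposal 1 (names are placeholders, not the
// actual Chronicle fields): close whatever was opened if construction fails.
public IndexedChronicle(String basePath) throws IOException {
    boolean ok = false;
    try {
        indexChannel = new RandomAccessFile(basePath + ".index", "rw").getChannel();
        dataChannel = new RandomAccessFile(basePath + ".data", "rw").getChannel();
        findLastRecord(); // placeholder: may throw, e.g. if the thread is interrupted
        ok = true;
    } finally {
        if (!ok)
            close(); // releases any channel opened before the failure
    }
}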

## Inconsistent library versions reminder

Hi. I have implemented a tool to detect library version inconsistencies. Your project has 1 inconsistent library and 3 falsely consistent libraries.

Take junit:junit for example: this library is declared as version 4.10 in testing, 4.11 in chronicle-fix, etc. Such version inconsistencies may cause unnecessary maintenance effort in the long run. For example, if two modules become inter-dependent, a library version conflict may happen. This has already become a common issue that hinders development progress, so version harmonization is necessary.

Assuming we applied version harmonization, I calculated the cost of harmonizing to each later version, including the up-to-date one. The cost refers to POM config changes and API invocation changes. Take junit:junit for example: if we harmonize all the library versions to 4.13-beta-2, the concern is how much the project code has to adapt to the newer library version. We provide an effort table to quantify the harmonization cost.

The effort table is listed below. It shows the overall harmonization effort by module. The columns give the number of library APIs and API calls (NA, NAC), deleted APIs and API calls (NDA, NDAC), and modified APIs and API calls (NMA, NMAC). Modified APIs are those whose call graph differs from the previous version. Take the first row for example: if the library is upgraded to version 4.13-beta-2, then of the 2 APIs used in module chronicle-fix, 0 are deleted in the recommended version (deleted APIs would throw a NoSuchMethodError unless the project is re-compiled) and 2 are regarded as modified, which could break the former API contract.

Index | Module        | NA(NAC) | NDA(NDAC) | NMA(NMAC)
------|---------------|---------|-----------|----------
1     | chronicle-fix | 2(4)    | 0(0)      | 2(4)
2     | testing       | 2(2)    | 0(0)      | 2(2)

We also provide a second table showing the files that may be affected by the library API changes, which can help to spot the API usages of concern and re-run the relevant test cases. The table is listed below.

Module | File | Type | API
-------|------|------|----
testing | testing/src/main/java/com/higherfrequencytrading/chronicle/perf/PackedHashedTableTest.java | modify | org.junit.Assert.assertTrue(boolean)
testing | testing/src/main/java/com/higherfrequencytrading/chronicle/tools/ObjectStreamTest.java | modify | junit.framework.Assert.assertEquals(java.lang.Object,java.lang.Object)
chronicle-fix | chronicle-fix/src/test/java/com/higherfrequencytrading/chronicle/fix/FixSocketReaderTest.java | modify | org.junit.Assert.assertTrue(boolean)
.. | .. | .. | ..

As for false consistency, take the com.higherfrequencytrading chronicle jar for example. The library is declared as version 1.9.2-SNAPSHOT in all modules; however, the declarations are written differently. As components are developed in parallel, if a single library version is updated, the declarations can become inconsistent as described above and cause the same issues.

If you are interested, you can find a more complete and detailed report in the attached PDF file.

## IndexedChronicle.createDataBuffer() ignores specified ByteOrder

The constructor for IndexedChronicle accepts a ByteOrder, which is used in createIndexBuffer(). However, it is not used in createDataBuffer(), and the native byte ordering is always used instead:

line 246: mbb.order(ByteOrder.nativeOrder());

It would seem that the purpose of specifying a byte ordering is mainly for the data buffer, so that methods like writeInt() and writeLong() can be called on the writing excerpt and then read(byte[]) can be used on the reading excerpt with a known byte ordering.

version: 1.7.2
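The effect is easy to demonstrate with a plain ByteBuffer (JDK only, for illustration):

// JDK-only demonstration: the same int becomes different bytes
// depending on the buffer's configured byte order.
ByteBuffer big = ByteBuffer.allocate(4).order(ByteOrder.BIG_ENDIAN);
big.putInt(0x01020304);    // stored as 01 02 03 04
ByteBuffer little = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN);
little.putInt(0x01020304); // stored as 04 03 02 01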

## writeBoolean() / readBoolean() flips the value

Using the writeBoolean() and readBoolean() helper methods to transfer a boolean value causes the actual value to flip in the process.

Sample JUnit test:

@Test
public void test_boolean() throws Exception {
    String testPath = "temp" + File.separator + "chroncle-bool-test";
    IndexedChronicle tsc = new IndexedChronicle(testPath, 12);
    tsc.useUnsafe(false);
    deleteOnExit(testPath);

    tsc.clear();

    Excerpt<IndexedChronicle> excerpt = tsc.createExcerpt();
    excerpt.startExcerpt(2);
    excerpt.writeBoolean(false);
    excerpt.writeBoolean(true);
    excerpt.finish();

    excerpt.index(0);
    boolean one = excerpt.readBoolean();
    boolean onetwo = excerpt.readBoolean();
    tsc.close();

    Assert.assertEquals(false, one);
    Assert.assertEquals(true, onetwo);
}

## Character-based read/write methods of Excerpt are not consistent with the DataInput/DataOutput interfaces

If data is written and then read from an Excerpt in the exact same manner (e.g. writeUTF() ... readUTF()), everything seems to work fine. However, if the writeUTF/Bytes/Chars() methods are used to write data and then a more general read(byte[]) is used to read it, the data cannot be properly 'decoded' using another DataInput. This is because AbstractExcerpt does not adhere to the documented behavior of DataOutput, as follows:

writeBytes() -- writes an extra length byte, which should not be included
writeChars() -- writes an extra 2 length bytes, which should not be included
writeUTF() -- only writes one length byte, instead of two

It seems to me that if Excerpt is to extend DataInput and DataOutput, its implementation should be consistent with the documented behavior of these methods.
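For reference, the documented encodings can be checked against the JDK's own DataOutputStream (JDK only, for illustration):

// JDK reference behaviour for the three methods listed above.
ByteArrayOutputStream baos = new ByteArrayOutputStream();
DataOutputStream out = new DataOutputStream(baos);
out.writeBytes("ab"); // 2 bytes: the low byte of each char, no length prefix
out.writeChars("ab"); // 4 bytes: 2 bytes per char, no length prefix
out.writeUTF("ab");   // 4 bytes: a 2-byte length prefix, then modified UTF-8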

## Java Chronicle not compiling

I get the following errors when I try to compile.

[INFO] Compiling 20 source files to C:\git\Java-Chronicle\target\classes
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.772s
[INFO] Finished at: Thu Apr 19 22:47:37 IST 2012
[INFO] Final Memory: 5M/11M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.0.2:compile (default-compile) on project chronicle: Compilation failure: Compilation failure:
[ERROR] C:\git\Java-Chronicle\src\main\java\vanilla\java\chronicle\impl\IndexedChronicle.java:[19,17] sun.nio.ch.DirectBuffer is Sun proprietary API and may be removed in a future release
[ERROR] C:\git\Java-Chronicle\src\main\java\vanilla\java\chronicle\impl\UnsafeExcerpt.java:[19,15] sun.misc.Unsafe is Sun proprietary API and may be removed in a future release
[ERROR] C:\git\Java-Chronicle\src\main\java\vanilla\java\chronicle\impl\UnsafeExcerpt.java:[20,17] sun.nio.ch.DirectBuffer is Sun proprietary API and may be removed in a future release
[ERROR] C:\git\Java-Chronicle\src\main\java\vanilla\java\chronicle\impl\UnsafeExcerpt.java:[226,25] sun.misc.Unsafe is Sun proprietary API and may be removed in a future release
[ERROR] C:\git\Java-Chronicle\src\main\java\vanilla\java\chronicle\impl\IndexedChronicle.java:[220,38] sun.nio.ch.DirectBuffer is Sun proprietary API and may be removed in a future release
[ERROR] C:\git\Java-Chronicle\src\main\java\vanilla\java\chronicle\impl\IndexedChronicle.java:[221,22] sun.nio.ch.DirectBuffer is Sun proprietary API and may be removed in a future release
[ERROR] C:\git\Java-Chronicle\src\main\java\vanilla\java\chronicle\impl\UnsafeExcerpt.java:[38,25] sun.nio.ch.DirectBuffer is Sun proprietary API and may be removed in a future release
[ERROR] C:\git\Java-Chronicle\src\main\java\vanilla\java\chronicle\impl\UnsafeExcerpt.java:[59,14] copyMemory(long,long,long) in sun.misc.Unsafe cannot be applied to (,long,byte[],int,int)
[ERROR] C:\git\Java-Chronicle\src\main\java\vanilla\java\chronicle\impl\UnsafeExcerpt.java:[147,14] copyMemory(long,long,long) in sun.misc.Unsafe cannot be applied to (byte[],int,,long,int)
[ERROR] C:\git\Java-Chronicle\src\main\java\vanilla\java\chronicle\impl\UnsafeExcerpt.java:[153,14] copyMemory(long,long,long) in sun.misc.Unsafe cannot be applied to (byte[],int,,long,int)
[ERROR] C:\git\Java-Chronicle\src\main\java\vanilla\java\chronicle\impl\UnsafeExcerpt.java:[231,30] sun.misc.Unsafe is Sun proprietary API and may be removed in a future release
[ERROR] C:\git\Java-Chronicle\src\main\java\vanilla\java\chronicle\impl\UnsafeExcerpt.java:[233,22] sun.misc.Unsafe is Sun proprietary API and may be removed in a future release
[ERROR] -> [Help 1]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
C:\git\Java-Chronicle>java -version
java version "1.6.0_20"
Java(TM) SE Runtime Environment (build 1.6.0_20-b02)
Java HotSpot(TM) 64-Bit Server VM (build 16.3-b01, mixed mode)

## Holding buffers in list

Hi -

Does the chronicle hold all mapped buffers in memory until it is closed, keeping references to them around the whole time?

I'm referring specifically to these Lists.

    private final List<MappedByteBuffer> indexBuffers = new ArrayList<MappedByteBuffer>();
    private final List<MappedByteBuffer> dataBuffers = new ArrayList<MappedByteBuffer>();

The reason I ask is that I'm curious whether this could create problems if a very large file is read with, for example, a small configured block size. This would result in a large number of buffers being allocated. If the underlying OS has an upper limit on the number of mappings that can be allocated, an "out of memory" exception would result once this limit was reached.

My question is, by holding onto these buffers in the list, does this prevent the OS from cleaning up these allocated maps in memory?

Thanks for the help!

## Idea: Work Queue Semantics

Hi Peter,

This is a great project. I was wondering if you have any plans for adding "Work Queue" semantics to this, a la beanstalkd and libkestrel. I've been looking for a robust in-process work queue for Java for a while now and I couldn't find one. The former is a C daemon and the latter is a semi-abandoned Scala library.

-- Drew

## Is there a bug in "vanilla.java.chronicle.ByteString"?

In vanilla.java.chronicle.ByteString:

public void append(byte b) {
    int len = length();
    if (len >= 255)
        throw new IndexOutOfBoundsException("Cannot append len=" + len);
    data[len] = b;
    data[0]++;
}

I guess "data[len] = b" should be "data[len + 1] = b".
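Assuming the length is kept in data[0] (which the data[0]++ suggests) and the bytes start at data[1], the corrected method would be (a sketch of the reporter's suggestion, not verified against the source):

// Sketch of the suggested fix, assuming data[0] holds the length and
// the appended bytes start at data[1]:
public void append(byte b) {
    int len = length();
    if (len >= 255)
        throw new IndexOutOfBoundsException("Cannot append len=" + len);
    data[len + 1] = b; // write just past the last byte, skipping the length byte
    data[0]++;
}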

## Cannot remove files created by IndexedChronicle on Windows 7 (32-bit JVMs only)

On my Windows 7 box I have run into an issue with Chronicle 1.6 where I cannot remove the data/index files created by an IndexedChronicle even after I have closed it. I only get this issue on 32-bit versions of Java 6 or Java 7; 64-bit versions are fine. I have also tested on Linux and do not get any problems there. I realise that it's probably more a JVM issue than a code issue but, oddly, everything works fine with an earlier version of Chronicle (1.3, I believe).

When I look at the JVM process in procexp on Windows I can see that the process no longer has file handles for the data/index files after calling close(). On 32-bit JVMs it does, however, have DLL handles to them. Weird.

The following code reproduces the problem.

import java.io.File;

import com.higherfrequencytrading.chronicle.Excerpt;
import com.higherfrequencytrading.chronicle.impl.IndexedChronicle;


public class TestChronicleFileRemove {

    public static void main(String[] args) throws Exception {
        String basePath = args[0];
        boolean useUnsafe = args.length > 1 ? Boolean.parseBoolean(args[1]) : false;
        IndexedChronicle chronicle = new IndexedChronicle(args[0], 16);
        chronicle.useUnsafe(useUnsafe);
        Excerpt excerpt = chronicle.createExcerpt();
        excerpt.startExcerpt(32);
        excerpt.writeChars("Hello");
        excerpt.finish();
        excerpt.close();
        chronicle.close();

        removeFile(basePath + ".data");
        removeFile(basePath + ".index");

    }

    private static void removeFile(String path) {
        if (new File(path).delete()) {
            System.out.println("removed: " + path);
        } else {
            System.out.println("could not remove: " + path);
        }
    }

}
