
lightning.net's People

Contributors

adamfur, algorithmsarecool, configurator, coreykaylor, danieladolfsson, decoyfish, gitter-badger, hey-red, hyc, ilyalukyanov, jakoss, jasonpunyon, jordanzaerr, jorgenws, kurtschelfthout, ok-ul-ch, pepelev, sebastienros, skalinets, ubercellogeek, valeriob


lightning.net's Issues

Problem with large MapSize (lmdb compiled with MinGW)

It's really strange, but the original binaries (lmdb64.dll) from the NuGet package cannot open an environment with MapSize > ~20 MB; mdb_env_open returns 0x08 (ERROR_NOT_ENOUGH_MEMORY). I checked the ruby wrapper for lmdb under Windows and it cannot do it either: "Exec format error". There is no problem on Linux.

So I took your VS solution for liblmdb (with the mdb sources updated from the original repository), built it, and it works fine; no problems with MapSize on either machine.

Can you confirm this strange behavior?

Accessing database from read-only transaction

First of all, this is not a bug. Let's assume we have an LMDB directory with a "tab" database already created by a previous execution of our process. According to the LMDB spec, the following use case is not valid:

        LightningDB.LightningEnvironment env = new LightningEnvironment(@"c:\8\");
        env.Open();

        using (var transaction = env.BeginTransaction(TransactionBeginFlags.ReadOnly))
        {
            var database = transaction.OpenDatabase("tab");
            string buf;
            transaction.TryGet(database, 2, out buf);  //First time - everything is OK
        }

        using (var transaction = env.BeginTransaction(TransactionBeginFlags.ReadOnly))
        {
            var database = transaction.OpenDatabase("tab");
            string buf;
            transaction.TryGet(database, 2, out buf);  //Second time - invalid parameter exception
        }

LMDB says that database handles should be reused, which is what Lightning.NET does flawlessly, but it also states that the first call to obtain a database handle should be performed from a write transaction. If the code above first opened the database from an empty write transaction, everything would be fine. That said, it took me a whole day to track this problem down.

Question: can we somehow protect users from shooting themselves in the foot, for instance by raising a descriptive exception when the first OpenDatabase call to obtain a handle is made from a read-only transaction?
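A sketch of the workaround the issue describes, using the types already shown above (whether Lightning.NET caches the handle exactly this way is an assumption):

```csharp
// Sketch, not the library's documented behavior: obtain the "tab" handle
// from a write transaction first, so later read-only transactions can reuse it.
var env = new LightningEnvironment(@"c:\8\");
env.Open();

using (var tx = env.BeginTransaction()) // default (write) transaction
{
    tx.OpenDatabase("tab");             // first handle acquisition
    tx.Commit();
}

using (var tx = env.BeginTransaction(TransactionBeginFlags.ReadOnly))
{
    var database = tx.OpenDatabase("tab"); // now safe: the handle already exists
    string buf;
    tx.TryGet(database, 2, out buf);
}
```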

Automatic cursor release upon transaction disposal causes memory corruption

When a cursor is disposed explicitly in a write transaction, everything is fine as long as it is disposed before Commit or Rollback. But if a cursor is left for "automatic" collection, the transaction's Dispose will try to close it after Commit/Rollback, which causes memory errors. The errors are quite elusive (I managed to reproduce them with the NOTLS flag, and they appear after a few minutes of releasing cursors after the write transaction has ended).

According to the LMDB documentation for mdb_cursor_close:

The cursor handle will be freed and must not be used again after this call. Its transaction must still be live if it is a write-transaction.

Can we have a descriptive exception in Cursor.Dispose if it is called after the transaction has already been closed?
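A minimal sketch of the safe ordering, using the CreateCursor API that appears elsewhere in these issues (the exact method names are an assumption):

```csharp
using (var tx = env.BeginTransaction())
{
    var cursor = tx.CreateCursor(db);
    try
    {
        // ... cursor operations ...
    }
    finally
    {
        cursor.Dispose(); // close while the write transaction is still live
    }
    tx.Commit();          // commit only after every cursor is closed
}
// Letting the transaction's Dispose close the cursor after Commit is
// exactly the pattern mdb_cursor_close forbids for write transactions.
```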

Cross platform build script

I would like to set up a build script, preferably using rake, so that cross-platform compilation and testing is as easy as possible. I'm happy to do this. Thoughts?

Proposal: Additional Unsafe nuget LightningDBFast

For those who can run in full trust, we can squeeze out every ounce of performance by avoiding Marshal.Copy altogether. I have spiked out a simple example off the dnx branch that would create a new assembly/NuGet and allow us to segment unsafe and safe code without creating too much of a maintenance burden, through a combination of partial classes and wildcard patterns in project.json. One assembly would include all *.cs + *.Safe.cs - *.Unsafe.cs, and the faster dll would include *.cs + *.Unsafe.cs - *.Safe.cs. Normally this would have made me puke to think about, but in the new dnx world it is very easy to support.

Hopefully everything I've mentioned makes sense?

Unable to load lmdb64.dll in web apps

I got an error in a web application complaining that the wrapper could not load the 64-bit dll, which I was able to remedy by fiddling around with the permissions on the files...

Do you know if there is a reason this was the case (increased security requirements in web applications?)

I installed via nuget.

Using `EnvironmentConfiguration` in `LightningEnvironment` does not work

This does not work:

this.environment = new LightningEnvironment(lmdbCacheDir, new EnvironmentConfiguration { MaxDatabases = 2 });
this.environment.Open();
using (var t = this.environment.BeginTransaction()) {
    this.dataDb = t.OpenDatabase(DataDatabaseName, new DatabaseConfiguration { Flags = DatabaseOpenFlags.Create });
    this.indexDb = t.OpenDatabase(IndexDatabaseName, new DatabaseConfiguration { Flags = DatabaseOpenFlags.Create });
    t.Commit();
}

but this does:

this.environment = new LightningEnvironment(lmdbCacheDir);
this.environment.MaxDatabases = 2;
this.environment.Open();
using (var t = this.environment.BeginTransaction()) {
    this.dataDb = t.OpenDatabase(DataDatabaseName, new DatabaseConfiguration { Flags = DatabaseOpenFlags.Create });
    this.indexDb = t.OpenDatabase(IndexDatabaseName, new DatabaseConfiguration { Flags = DatabaseOpenFlags.Create });
    t.Commit();
}

The only difference is how MaxDatabases is set: through the configuration object passed to the environment, or afterwards via a property on the environment. One would think they would behave the same.

Doc for Cursor.Put is out of date, could be misleading; excessive marshalling

Compare this line

with the current docs:

This function stores key/data pairs into the database. The cursor is positioned at the new item, or on failure usually near it. Note: Earlier documentation incorrectly said errors would leave the state of the cursor unchanged.

In addition, there is currently no way to move the cursor without marshalling both keys and values. There are valid use cases where one would want to skip value marshalling, e.g. a LINQ-like First/Where(predicate) search, or accessing lagged items with N MovePrev calls (here even key marshalling is not needed if we want a lagged value V(i-n) for a key K(i)).

The same is true for collections, since they just use the same cursor.Get() where marshalling happens.

Enumerator.MoveNext uses MoveNext, so even with .Skip(N) all N values are marshalled.


public static CursorGetByOperation MoveNextBy(this Cursor cur)
{
    return CursorMoveBy(cur, cur.MoveNext);
}

Lazy marshalling could be useful because marshalling is a more or less expensive operation, especially for non-trivial keys/values.

There is one thing to keep in mind with lazy marshalling, though currently it is irrelevant because marshalling always happens right after any move:

Values returned from the database are valid only until a subsequent update operation, or the end of the transaction.

An example in py-lmdb.

Open transaction as generic

Maybe implement a generic version of BeginTransaction and attach the string or int key provider to the transaction.

It would make it easier to work with.

Problems with attached debugger (64-bit LMDB only)

http://pastebin.com/XW5ej3pa - this (and not only this) example frequently crashes (at around the 300k-th iteration) on two machines when a debugger is attached. Some examples crash earlier (on both machines too).

Tested with VS2013, with both the lmdb binaries from the NuGet package and binaries built from VS. Without the debugger, all is OK.

Can you confirm this problem?

I'm sorry for my 'MGIMO finished' English.

Why is the Put method not on Transaction?

It seems illogical that Put is on the Database object. You don't want to create a new database every time you create a new transaction; you want to reuse an existing Database object.

Alternatively, could you provide a slightly bigger sample showing how multiple transactions against the same database could be done, some of them writing and some read-only?

I tried a workaround with a cursor (but it crashes with an access violation):

        var env = new LightningEnvironment("db", EnvironmentOpenFlags.None);
        env.Open();
        var txn = env.BeginTransaction();
        var db = txn.OpenDatabase(null, DatabaseOpenFlags.None);
        txn.Commit();
        txn.Dispose();

        var rnd = new Random(1234);
        for (int j = 0; j < 1000; j++)
        {
            txn = env.BeginTransaction();
            var cursor = new LightningCursor(db, txn);
            for (int i = 0; i < 1000; i++)
            {
                var key = new byte[rnd.Next(10, 50)];
                rnd.NextBytes(key);
                var value = new byte[rnd.Next(50, 500)];
                rnd.NextBytes(value);
                cursor.Put(key, value, PutOptions.None);
            }
            cursor.Close();
            cursor.Dispose();
            txn.Commit();
            txn.Dispose();
        }
        db.Close();
        db.Dispose();
        env.Close();
        env.Dispose();

Default Converters are not "symmetrical"

Is there a reason why converters predefined in DefaultConverters are not "paired"?
There are plenty of ConvertFromBytes instances but only a few ConvertToBytes. For instance, the Int64 or byte[] converters need to be added explicitly if someone wants to use them.

Small example

First of all, I'm sorry for my primitive English. Could you write a small example of using Lightning.NET, and maybe best practices? It's really difficult to understand the .NET wrapper right now. I looked at the Java/Ruby wrappers; they are more understandable.

My task is a high-load cache (single-threaded for now), up to 32 GB. I tried the classic way: create an environment, set the environment settings, create a transaction, open a database, then start putting my data via the transaction (key: long, value: double[2]). Every time I get an error somewhere in lmdb64.dll at around the 100k-th iteration.

Thank you for your work, but we need a little more information about usage.

Spawning new transactions and closing them is not thread safe

It looks like BeginTransaction and Commit, Abort, as well as OpenDatabase, are not thread safe. They are probably safe on the LMDB side, but the C# support classes like TransactionManager use a HashSet not wrapped in any lock. They fail when multiple readers are created and destroyed on different threads in a short period of time.

Test fails with provided binaries

On my machine CursorShouldDeleteElements fails when I clone the repo and run.

It deletes key1 and key3, not key2. I updated the source to 0.9.14 and recompiled the binaries using your lightningdb-win project, and the test started to pass.

Database growing

I think it would be a good idea to return a value from the transaction's Put method (and maybe from some others too) so callers can check for MDB_MAP_FULL and resize the database when needed. Or do it directly in the wrapper (as an option, of course). What do you think?

I'm sorry for my primitive English.
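A sketch of what the proposal could look like from the caller's side. The exception type and its StatusCode property are assumed names, not this wrapper's confirmed API; the error code itself is LMDB's documented MDB_MAP_FULL value.

```csharp
// Illustrative only: LightningException and StatusCode are assumed names.
const int MDB_MAP_FULL = -30792; // LMDB's documented "map full" error code

try
{
    using (var tx = env.BeginTransaction())
    {
        tx.Put(db, key, value);
        tx.Commit();
    }
}
catch (LightningException ex) when (ex.StatusCode == MDB_MAP_FULL)
{
    // mdb_env_set_mapsize may only be called while no transactions are active
    env.MapSize = env.MapSize * 2;
    // ... retry the write ...
}
```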

Release Nuget Package

It would be preferable to consume this project through a NuGet package. I can set up the automation for creating the package (with manual publish) if the rake build script is agreeable to the project.

Providers should change to be more extensible

As it stands, the providers are a bit clumsy, relying on static state / extension methods (in a separate namespace). The IL emitting is overly complex and probably unnecessary given what it's doing. I'm spiking an alternative for review, including tests. This should also be in line with the future goals of the project to integrate with custom serializers.

This will ultimately boil down to registering converters that deal with something similar to Func<byte[], TTargetType>.
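A sketch of what such registration could look like. The interface and method names here are illustrative, not the project's actual API; only the Func<byte[], T> shape comes from the proposal above.

```csharp
// Hypothetical registry built around Func<byte[], T> / Func<T, byte[]> pairs.
public interface IConverterRegistry
{
    void RegisterFromBytes<T>(Func<byte[], T> fromBytes);
    void RegisterToBytes<T>(Func<T, byte[]> toBytes);
}

// Usage: register both directions so conversions stay symmetrical.
registry.RegisterFromBytes(bytes => BitConverter.ToInt64(bytes, 0));
registry.RegisterToBytes<long>(value => BitConverter.GetBytes(value));
```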

Can't commit transaction when using named database

Whenever I try to use a named database, my write transaction never completes. The test runner just spins and eventually aborts. Here's a passing and a failing test:

[Test]
[TestCase(null)] // works
[TestCase("test")] // fails
public void CanPutAndGetData(string dbName)
{
    var path = SetupDbFolder();

    using (var env = new LightningEnvironment(path))
    {
        env.MaxDatabases = 2;

        env.Open();

        using (var tran = env.BeginTransaction())
        {
            using (var db = tran.OpenDatabase(dbName, DatabaseOpenFlags.Create))
            {
                tran.Put(db, "1", "Ronnie");
            }

            // when using a named database
            // i never get past here
            // there's no exception
            // the test takes some time to abort
            // so I believe it may be a stack overflow
            tran.Commit();
        }

        using (var tran = env.BeginTransaction())
        {
            using (var db = tran.OpenDatabase(dbName))
            {
                var value = tran.Get<string, string>(db, "1");
                Assert.AreEqual("Ronnie", value);
            }

            tran.Commit();
        }

        env.Close();
    }
}

private string SetupDbFolder()
{
    var desktop = Environment.GetFolderPath(Environment.SpecialFolder.Desktop);
    var path = Path.Combine(desktop, "lmdb tests");
    if (Directory.Exists(path))
        Directory.Delete(path, true);

    return path;
}

cur.MoveToFirstDuplicate() throws

It looks like this native call doesn't return a key/value pair but only positions the cursor (or I am calling it in the wrong way).

When I add an operation check in the middle of the following method, it works as expected; without it, keyStruct.ToByteArray(res) throws because the returned ValueStructures are empty.

private KeyValuePair<byte[], byte[]>? Get(CursorOperation operation, ValueStructure? key = null, ValueStructure? value = null)
{
    var keyStruct = key.GetValueOrDefault();
    var valueStruct = value.GetValueOrDefault();

    var res = NativeMethods.Read(lib => lib.mdb_cursor_get(_handle, ref keyStruct, ref valueStruct, operation));

    // HERE: need to check and provide the current values
    if (operation == CursorOperation.FirstDuplicate) {
        res = NativeMethods.Read(lib => lib.mdb_cursor_get(_handle, ref keyStruct, ref valueStruct, CursorOperation.GetCurrent));
    }

    return res == NativeMethods.MDB_NOTFOUND
        ? (KeyValuePair<byte[], byte[]>?) null
        : new KeyValuePair<byte[], byte[]>(keyStruct.ToByteArray(res), valueStruct.ToByteArray(res));
}

The whole test below. Note the expected behavior right before the last loop:

[Test]
public void CursorShouldPutDupValues()
{
    var db2 = _txn.OpenDatabase("dup",
        DBFlags.Create | DBFlags.DuplicatesSort
        | DBFlags.DuplicatesFixed | DBFlags.IntegerDuplicates);

    using (var cur = _txn.CreateCursor(db2)) {
        var keys = Enumerable.Range(1, 50000).ToArray();
        foreach (var k in keys) {
            cur.Put(Encoding.UTF8.GetBytes("key"), k, CursorPutOptions.None);
        }
        // overwrite
        foreach (var k in keys) {
            cur.Put(Encoding.UTF8.GetBytes("key"), k, CursorPutOptions.None);
        }

        var kvp = cur.MoveToFirst();
        kvp = cur.MoveNextDuplicate();
        kvp = cur.MoveNextDuplicate();
        kvp = cur.MoveToFirstDuplicate(); // cancel the moves above and start from the beginning
        foreach (var k in keys) {
            //var kvp = cur.GetCurrent();
            Assert.AreEqual(k, BitConverter.ToInt32(kvp.Value.Value, 0));
            kvp = cur.MoveNextDuplicate();
        }
    }
}

Ripple-cli gem and building on Mac

While trying to build on Mac I faced a problem with ripple-cli, which I worked around as described here. I don't like this solution as it's a hack. Maybe there is a more straightforward way to do this?

How to limit RAM usage by the LMDB

I'm trying to build a quite big database in LMDB (about 560 M keys in one database; we have 3 databases). Also, while building we need to read some already-saved data, so append mode can't be used in our case. The problem is that at about 30% of build progress our RAM usage hits the limit; the screenshot shows the issue:

Screen of issue

So my question is: can I limit the amount of memory that LMDB can use? This database is only a test one, for 70k "datasets". In production we will have to store about 3.4 M datasets, and every dataset needs to store 8k keys.

We are looking for an SSD-optimized solution with the lowest RAM usage possible.

Usability - BeginTransaction() before Open() causes an access violation

If LightningEnvironment.BeginTransaction() is called before Open(), a crash results.

            var env = new LightningEnvironment("C:\\Somewhere");
            env.MapSize = 12345;
            env.BeginTransaction();

I know there are probably many places where the library or LMDB might behave this way, but since this one is right at the start and may discourage new users, it seems worthwhile to check Environment.IsOpen in BeginTransaction() and throw appropriately.
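A sketch of the suggested guard. The IsOpen flag and the exact signature are assumptions about the library's internals:

```csharp
// Hypothetical guard inside LightningEnvironment; names are illustrative.
public LightningTransaction BeginTransaction(TransactionBeginFlags flags)
{
    if (!this.IsOpen)
        throw new InvalidOperationException(
            "The environment must be opened (call Open()) before beginning a transaction.");

    // ... existing native mdb_txn_begin call ...
}
```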

Use git submodule for lmdb dependency

For discussion: it would be nice if the dll weren't precompiled and committed, and were instead produced by a build script that compiles the .dll or .so file from the submodule. This would make it easier to pull in and react to changes from the upstream lmdb project. I am happy to cook something up for this. My approach would be to use MinGW and a separate Makefile for the Windows-specific dll.

NuGet package does not contain xml comments

Apparently the XML comment file was not generated when compiling for NuGet, meaning the library shows no documentation in VS unless you link directly to the source. It's a shame, too, because the source code is very well documented.

VS Express 2010 - build/run problems

I just downloaded the current release and tried to build it using VS Express 2010. I had to remove a reference to "CommonAssembly.cs" in the VS solution to make it build. Once I did, it built OK. I created a test project to run a few unit tests.

My test project fails with a "dll load error" saying it can't find lmdb32. I have taken this file from the Binary directory and added it to all projects, but still no luck.

I am running on Windows 8.

Any suggestions?

Question on locking and read transaction disposal

May I ask you a couple of questions that are not directly about the current implementation, but depend on the library's internals? You know .NET and LMDB better than me and could spot some flaws, as before.

First, about writes.
I want to use the lib with async without worrying about tasks that switch OS threads, so I wrap all write transactions in an async task like this:

        /// <summary>
        /// Performs a write transaction asynchronously
        /// </summary>
        internal static async Task<T> WriteAsync<T>(this DbEnvironment env, Func<Transaction, Task<T>> writeTask) {
            var tcs = new TaskCompletionSource<object>();
            if (!env.WriteQueue.IsAddingCompleted) {
                Func<Transaction, Task<object>> job = async (x) => await writeTask(x);
                var tuple = Tuple.Create(tcs, job);
                env.WriteQueue.Add(tuple);
            } else {
                tcs.TrySetCanceled();
            }

            var res = await tcs.Task;
            return (T)res;
        }

Where env.WriteQueue is a concurrent queue that is consumed on a separate long-running Task, which executes the write task and sets the result on a TaskCompletionSource. As we discussed in the latest comments of #32, my previous solution (creating transactions asynchronously) wasn't working, but with TCS all write transactions are concurrent from the C# point of view and serialized on a separate thread (and there is still a named lock for different processes).

            // --- THIS IS INSIDE ENV CONSTRUCTOR ---
            // Writer Task
            // In the current process writes are serialized via the blocking queue
            // Across processes, writes are synchronized via WriteTxnGate
            _cts = new CancellationTokenSource();
            _writeTask = Task.Factory.StartNew(() => {
                while (!WriteQueue.IsCompleted) {
                    // BLOCKING
                    try {
                        var tuple = WriteQueue.Take(_cts.Token);
                        var tcs = tuple.Item1;
                        var job = tuple.Item2;
                        try {
                            var txn = BeginTransaction(TransactionFlags.ReadWrite);
                            var result = job(txn).Result;
                            // Autocommit, if forgot to commit
                            if (txn.State == TransactionState.Active) {
                                txn.Commit();
                            }
                            tcs.SetResult(result);
                        }
                        catch (Exception e) {
                            tcs.SetException(e);
                        }
                    }
                    catch (InvalidOperationException e) {

                    }
                }
            }, _cts.Token, TaskCreationOptions.LongRunning, TaskScheduler.Default);

I believe this is quite similar to what LMDB does internally, and all the write locking could be done on the .NET side as well. Assuming that I am going to use only this async wrapper for all transactions, do you see any problems with parallelism here?

Second, about read transactions
I am reading this thread and it describes a pretty common (and my major) use case, where many readers frequently read what writers add to the DB. My microbenchmarks show that, within a single transaction, LMDB can be as fast as some immutable in-memory data structures like F# map, and eliminating a new transaction allocation for each query could make random access just as fast and would allow reading DB updates very quickly.

The solution there is to create many read-only transactions and never commit them, but instead reset and renew them and store in a pool or a thread-local storage.

Howard @hyc writes:

In the actual LMDB API read transactions can be reused by their creating thread, so they are zero-cost after the first time. I don't know if any of the other language wrappers leverage this fact.

(Italics are mine.) Assuming again that I want to wrap all reads in a fashion similar to the writes above, a pool is not a good option, because .NET Tasks can run on threads other than the one where a transaction was created. If Go's threads are similar to C# Tasks, Howard then more or less confirms this:

It is unfortunate that you're using a system like Go that multiplexes on top of OS threads. Your pool is going to require locks to manage access, and will be a bottleneck. In a conventional threaded program, thread-local storage can be accessed for free.

Before I saw that mail list thread, I had a naive implementation:

        /// <summary>
        /// Performs a read transaction asynchronously
        /// </summary>
        internal static async Task<T> ReadAsync<T>(this DbEnvironment env, Func<Transaction, Task<T>> readTask) {
            var txn = await env.BeginTransactionAsync(TransactionFlags.ReadOnly);
            var res = await readTask(txn);
            // Autocommit, if forgot to commit
            if (txn.State == TransactionState.Active) {
                txn.Commit();
            }
            return res;
        }

Instead, inside env:

internal ThreadLocal<Transaction> TlTxn;
... in constructor:
TlTxn = new ThreadLocal<Transaction>(
                () => {
                    var txn = BeginTransaction(TransactionFlags.ReadOnly);
                    txn.Reset(); // further on, we always renew it, not begin
                    return txn;
                }, true); // track to dispose all of them later

Then the initial naive method becomes:

        /// <summary>
        /// Performs a read transaction asynchronously
        /// </summary>
        internal static async Task<T> ReadAsync<T>(this DbEnvironment env, Func<Transaction, T> readTask) {
            return await Task.Run(() => {
                var txn = env.TlTxn.Value;
                if (txn != null && txn.State == TransactionState.Reset) {
                    // this is expected state
                    txn.Renew();
                }
                else {
                    if (txn != null) env.TlTxn.Value.Dispose();
                    txn = env.BeginTransaction(TransactionFlags.ReadOnly);
                    env.TlTxn.Value = txn;
                }
                // inside this task only a single thread touches txn unless readTask shares it with others
                // because readTask is not async, the thread of the outer task doesn't switch (doesn't await anything)
                var res = readTask(txn);
                // Autoreset
                if (txn.State == TransactionState.Active) {
                    txn.Reset();
                }
                return res;
            });
        }

That works in simple tests, but does anything catch your eye? In particular, I worry about the disposal of Transactions inside the ThreadLocal<> object when, in a long-running app, threads come and go from the thread pool. Also, I wanted to keep readTask returning a Task and await it, but then I realized the awaiting thread could switch to another task and access the same thread-local transaction from another ReadAsync call while awaiting the readTask (or after awaiting, if the await returns on another OS thread). The second quote is probably about that, and a lock here could deadlock, or is just too complex to reason about; ThreadLocals with async lambdas are tricky and I am not sure I grasp the subtleties, so readTask as Func<Transaction, T> is enough.

Finally, do you think this is enough to switch off LMDB locks completely with the MDB_NOLOCK option? It is not a goal per se, but I want the C# version to be as thread-safe as the native one.

Getting "MDB_MAP_FULL: Environment mapsize limit reached" while writing new data

It seems like we are adding way too much data (JSON documents - metadata) asynchronously and don't give LMDB a chance to grow its map size.

Our write pattern isn't stable; our system (Zet Universe) writes a lot when new data is being added and processed by a variety of data processors, but then it remains silent, ready for its user to look at data.

What could be done to avoid this error?

Thanks,
Daniel
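One mitigation, consistent with how MapSize is used elsewhere in these issues, is to set a generously large map size up front; on Linux the memory map is sparse, so only pages actually written consume disk, though on Windows the data file may be preallocated to the full size. The exact figure below is illustrative:

```csharp
// Sketch: reserve a large virtual map up front (illustrative size).
var env = new LightningEnvironment(path)
{
    MapSize = 100L * 1024 * 1024 * 1024 // 100 GB of address space, not immediate disk usage
};
env.Open();
```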

Nuget package needs updating

Started investigating why I was getting MDB_MAP_FULL: Environment mapsize limit reached, and when I looked at the repo's LightningEnvironment, it's different from what came from NuGet ;-)

KeyExists

    /// <summary>
    /// Returns true if the key is found in the DB
    /// </summary>
    /// <param name="db"></param>
    /// <param name="key"></param>
    /// <returns></returns>
    public Boolean KeyExists(LightningDatabase db, TKey key)
    {
        var keyBytes = this.GetKeyBytes(db, key);
        return db.Get(keyBytes) != null;
    }

MapSize for Lightning Environment should be Unsigned Long instead of Long

The current version of Lightning.net seems to have exactly the same issue as, for example:

Venemo/node-lmdb#13

The following code produces an AccessViolationException after running for a while, even though the map size should be sufficient. After changing the parameter to ulong / UIntPtr, the access violation was gone.

class Program
{
    private static void Main(string[] args)
    {
        var path = @"c:\data\lightning";

        if (Directory.Exists(path))
            Directory.Delete(path, true);

        if (!Directory.Exists(path))
            Directory.CreateDirectory(path);

        var env = new LightningEnvironment(@"c:\data\lightning", EnvironmentOpenFlags.WriteMap)
        {
            MapSize = 10L * 1024 * 1024 * 1024
        };

        env.Open();
        var tx = env.BeginTransaction();
        var db = tx.OpenDatabase();
        tx.Commit();
        tx.Dispose();

        var stopWatch = new Stopwatch();
        stopWatch.Start();
        var key = 0L;
        for (long j = 1L; j < 10000L; j++)
        {
            var t = env.BeginTransaction();
            var cursor = new LightningCursor(db, t);
            Console.WriteLine("Writing chunk {0}", j);
            for (var i = 1L; i < 1000L; i++)
            {
                cursor.Put(Encoding.UTF8.GetBytes(key.ToString("0000000000000000")),
                    Encoding.UTF8.GetBytes(key.ToString("0000000000000000")),
                    LightningDB.PutOptions.AppendData);
                key++;
            }
            cursor.Close();
            cursor.Dispose();
            t.Commit();
            t.Dispose();
        }
        stopWatch.Stop();
        Console.WriteLine("used milliseconds " + stopWatch.ElapsedMilliseconds);

        db.Close();
        env.Close();
    }
}

mdb_set_compare is missing

This is very useful. E.g. I want to map a SortedList<K,V> 1-to-1 to a named DB to make it persistent, and I have a struct with a custom comparison (not a string and not an integer) that I use as a key very often.

DllNotFoundException version 0.9.2.40

I created a library project that uses LightningDB and a new web app project (classic ASP.NET). Invoking the library function throws an exception saying "Additional information: C:\blablabla\ProjectWeb\lmdb32.dll", while lmdb32.dll is actually at C:\blablabla\ProjectWeb\bin\lmdb32.dll.

Update for new dnx runtimes

After a long hiatus, it's finally a priority for me to get this working for our needs again. I would like to get it all running on dnx and have an idea of what it will take to get there.

Dnx does have a new project system that simplifies the project structure, but would require change. It would allow us to target "dnxcore50", "dnx451", "net45", "net40" very easily though.

The dnx runtime doesn't guarantee the same behavior that mono does regarding alternate file extension searching for DllImport, but there is another approach to dynamically assign delegates to the function pointers in the native libs. This is how the new kestrel web server is doing things under the covers.

The good news is that the entire nuget experience is significantly improved on dnx and it can remove a lot of the complexities being dealt with today through the rakefile, ripple, etc.

I'm also thinking the payoff of embedding the lmdb submodule isn't worth it, given how infrequently things seem to be updated. I'm wondering if including the dll, dylib, etc. (the way you had it to begin with) is going to be easier in the long run and simplify the approachability of the project.

Also, I can set up Travis CI and AppVeyor builds to verify that Windows and Linux behavior is working. I might also suggest that each CI build produces a nuget package and publishes it to a MyGet feed. Then publishing to the public NuGet feed on demand from MyGet becomes a very easy step.

Anything sound unreasonable?

Unhandled exception in lmdb64.dll

This might not be a real issue, but I had no other means of contacting you. Sorry for that.

I recently came across Lightning.NET and decided to give it a try as a low-latency history provider for an ASP.NET project I'm working on. It must be shared between several separate applications, so I thought Lightning.NET might do the trick.
While using Lightning.NET I'm getting an unhandled exception in the included lmdb64.dll that says:

Unhandled exception at 0x000000006C8C2371 (lmdb64.dll) in w3wp.exe: 0xC0000005: Access violation writing location 0x0000000000000008.

This is a code I'm using:
http://pastebin.com/vR53qFGM

When I run unit tests they pass unless I debug-test and wait for a while. Then the same exception occurs.

Am I missing anything? Perhaps I'm breaking some workflow rules of lmdb? Is the environment or database open for too long or too short? Is it a good idea at all to employ the Singleton pattern here?
I'd be glad to receive any help.

Proposal: Get rid of Converters

I don't think they add a lot of value, and they contribute quite a bit of complexity throughout the code-base. The value they provide could be a simple extension method in the project that consumes this API.
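To illustrate the point, the same convenience could live as extension methods in a consuming project, built on the byte[]-based Put/Get overloads used elsewhere in these issues (a sketch; the UTF-8 policy here is just an example of what a consumer might choose):

```csharp
using System.Text;

public static class LightningTransactionExtensions
{
    // String convenience overloads built on the raw byte[] API,
    // replacing a converter registry with plain extension methods.
    public static void Put(this LightningTransaction txn, LightningDatabase db, string key, string value)
    {
        txn.Put(db, Encoding.UTF8.GetBytes(key), Encoding.UTF8.GetBytes(value));
    }

    public static string GetString(this LightningTransaction txn, LightningDatabase db, string key)
    {
        var bytes = txn.Get(db, Encoding.UTF8.GetBytes(key));
        return bytes == null ? null : Encoding.UTF8.GetString(bytes);
    }
}
```

Each consumer then owns its serialization policy instead of the library having to anticipate every type.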

Incompatibility with lmdb python tools

I want to use the lmdb Python tools https://lmdb.readthedocs.io/en/release/#command-line-tools to view stats about my database and so on (they seem very useful), but unfortunately a database generated by Lightning.NET is incompatible with their binaries (or so I assume). When I try to open the database with these tools I get this error:

  File "C:\Users\jsyty\AppData\Local\Programs\Python\Python35-32\lib\site-packages\lmdb\tool.py", line 592, in main
    max_dbs=opts.max_dbs, create=False)
lmdb.InvalidError: C:\LGBS\POC\PocGui\lmdbDatabase: MDB_INVALID: File is not an LMDB file

Could it be caused by a difference in library versions? Python is working with LMDB 0.9.18 (https://lmdb.readthedocs.io/en/release/#changelog).

Add compatibility targets to the solution to compile on *nix

Currently a post-build event runs a command that copies the lmdb.dll binary to the LightningDB.Tests/bin directory. The command is a Windows command and the binary is a Windows DLL.
We should add a target to the solution that copies the liblmdb.so binary to this directory as well, and add that binary to the repository.

DRY up the code

The Marshal'ing of values back and forth could be reduced to something implementing IDisposable, as one example. I'll take a crack at it and submit a pull request first. I realize this type of thing is subjective, but I'll do my best not to step on your toes.

TryGet

public bool TryGet&lt;TValue&gt;(LightningDatabase db, TKey key, ref TValue value)
{
    var keyBytes = this.GetKeyBytes(db, key);
    var valueBytes = db.Get(keyBytes);

    if (valueBytes == null)
        return false;

    value = this.GetValueFromBytes&lt;TValue&gt;(db, valueBytes);
    return true;
}

How to get the database open flag before open

Hi there, I'm a newbie to Lightning.NET. I appreciate any help and/or comments.

Can someone please give me a hint on whether it is possible to know the open flags used to create a database? As part of my learning, I am trying to list all databases defined in a given folder and whether or not each one allows duplicates.

Here is my sample code in C#. In the first part I create an environment and "table1" and put one record into it. Then in the second part I pretend I don't know whether "table1" supports duplicates and try to read the "entry". How can I tell whether "table1" supports duplicates?

Any comments are highly appreciated

Cheers,
Creambun

//===============================
Encoding enc = Encoding.UTF8;
const string STR_DATA_FOLDER_PATH = @"C:\temp\testData\DbFlag";

// creating the environment
using (var env = new LightningEnvironment(STR_DATA_FOLDER_PATH))
{
    env.MapSize = 1024 * 1024 * 1024;
    env.MaxDatabases = 100000;
    env.MaxReaders = 10;

    env.Open();

    using (var txn = env.BeginTransaction())
    {
        // create table "table1"
        using (var db = txn.OpenDatabase("table1", new DatabaseOptions { Encoding = enc, Flags = DatabaseOpenFlags.Create }))
        {
            // add 1 record into "table1"
            txn.Put(db, enc.GetBytes("Key1"), enc.GetBytes("Value1"));
            txn.Commit();
        }
        // the using blocks dispose the db, txn and env handles
    }
}

// read back from the environment
using (var env = new LightningEnvironment(STR_DATA_FOLDER_PATH, EnvironmentOpenFlags.ReadOnly))
{
    env.MapSize = 1024 * 1024 * 1024;
    env.MaxDatabases = 100000;
    env.MaxReaders = 10;

    env.Open();

    using (var txn = env.BeginTransaction(TransactionBeginFlags.ReadOnly))
    {
        using (var db = txn.OpenDatabase(null, new DatabaseOptions { Encoding = Encoding.UTF8, Flags = DatabaseOpenFlags.None }))
        {
            // db is the "master" db that contains all database names
            Assert.AreEqual(1, txn.GetEntriesCount(db));

            // use a cursor to loop through all existing database "entries"
            using (var cursor = txn.CreateCursor(db))
            {
                var kvPair = cursor.MoveNext();
                Assert.IsNotNull(kvPair);
                Assert.IsTrue(kvPair.HasValue);

                // the name of my only database
                Assert.AreEqual("table1", enc.GetString(kvPair.Value.Key));

                // How can I convert "b" into the "DatabaseOpenFlags"?
                // I want to know whether this database supports duplicates or not
                byte[] b = kvPair.Value.Value;
            }
        }
    }
}
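For what it's worth, LMDB's C API answers this question directly: `mdb_dbi_flags()` returns the flags a named database was opened with, and MDB_DUPSORT is defined as 0x04 in lmdb.h. If the wrapper doesn't surface it, a direct P/Invoke could look like the sketch below (the C function and constant are real; the binding and helper are illustrative and assume access to the raw transaction and dbi handles):

```csharp
using System;
using System.Runtime.InteropServices;

static class DbFlagsSketch
{
    // C API: int mdb_dbi_flags(MDB_txn *txn, MDB_dbi dbi, unsigned int *flags)
    [DllImport("lmdb", CallingConvention = CallingConvention.Cdecl)]
    static extern int mdb_dbi_flags(IntPtr txn, uint dbi, out uint flags);

    const uint MDB_DUPSORT = 0x04; // from lmdb.h

    // Reports whether the database was created with duplicate-key support.
    public static bool SupportsDuplicates(IntPtr txn, uint dbi)
    {
        uint flags;
        mdb_dbi_flags(txn, dbi, out flags);
        return (flags & MDB_DUPSORT) != 0;
    }
}
```

This avoids trying to parse the raw MDB_db record bytes read back from the master database, whose layout is an implementation detail.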

Convert tests to NUnit for easier testing on Mono

I would like to set up CI builds for both Windows and Mono, and to make the tests as easy as possible to run on both I would like to convert them to NUnit. I'd also change the way some of them are written to eliminate the race-condition failures that happen when they run in a batch. Thoughts?
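As a rough sketch of the converted shape under NUnit (class and method names here are hypothetical; the left side of each mapping depends on the current framework):

```csharp
using NUnit.Framework;

[TestFixture]
public class EnvironmentTests
{
    [SetUp]    // replaces a per-test constructor / [TestInitialize]
    public void Init() { /* create a fresh environment directory per test */ }

    [TearDown] // replaces IDisposable.Dispose / [TestCleanup]
    public void Cleanup() { /* delete the directory so tests don't share state */ }

    [Test]     // replaces [Fact] / [TestMethod]
    public void CanOpenEnvironment() { /* ... */ }
}
```

Giving each test its own environment directory in SetUp/TearDown is also what removes the batch-run race conditions mentioned above.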
