appy-one / acebase

A fast, low memory, transactional, index & query enabled NoSQL database engine and server for node.js and browser with realtime data change notifications

License: MIT License

JavaScript 0.26% TypeScript 99.57% Shell 0.17%
database nodejs realtime-database angular browser electron firebase indexeddb nosql nosql-database

acebase's People

Contributors

appy-one, dependabot[bot], donl, etekweb, futurgh, qpwo, willrogers007

acebase's Issues

Exported data is not type-safe by default

Data that has no JSON notation, such as dates, binary data and path references, is not exported type-safely because the type_safe flag is not set to true by default; the flag itself is not documented and the TS method definition does not allow it to be set. The flag should be set to true by default and it should be possible to change it.

Example:

const ref = db.ref('typesafety');
await ref.set({ 
   text: 'Checking typesafety',
   date: new Date(),
   binary: new TextEncoder().encode('binary data'),
   reference: new PathReference('some/other/data')
});
let json = '';
await ref.export({ write: str => json += str });

Value of json: {"text":"Checking typesafety","date":"2021-12-31T11:55:14.380Z","binary":"<~@VK^gEd8d<@<>o~>","reference":"some/other/data"}

Expected (typesafe) value: {"text":"Checking typesafety","date":{".type":"Date",".val":"2021-12-31T11:55:14.380Z"},"binary":{".type":"Buffer",".val":"<~@VK^gEd8d<@<>o~>"},"reference":{".type":"PathReference",".val":"some/other/data"}}

Child events on arrays are fired on incorrect paths

If child events are bound to a path that is an array, the references in the callbacks will contain the wrong child paths

Example:

// Add subscription:
db.ref('chats/chat1/members_array').on('child_added', ref => {
   console.log(ref.path); // "chats/chat1/members_array/2" instead of "chats/chat1/members_array[2]"
});

Reflection on array child missing key property

Reflection on an array value's child is missing the key property; it should contain the child's index in the parent array.

const info = await db.ref('some/array[2]').reflect('info');
console.log(info.key); // Should be 2, but is undefined

Filter on nested data

After reading through the docs, it seems there may not be a way to filter on nested data. For example, let's say I have a collection of items that looks like:

const myEntries = [
  {id: 1, nestedObject: { color: "blue" } },
  {id: 2, nestedObject: { color: "pink" } }
]

Can I perform a query for only the first entry based on the fact that it has a blue nestedObject? Based on the docs that should look something like:

db.query('myEntries')
  .filter('nestedObject.color', 'like', 'blue') // not currently possible?

But I don't think this works because the first argument of filter is described as a "key" not an object notation path. Is there something I'm missing or is this really not possible? Are there plans to support this type of query?

Indices with custom storage not working

I have written a custom storage class and it seems to be working just fine, except that if I try to create an index I get file access errors indicating it is trying to use non-custom storage for those operations, e.g.:

const db = new AceBase('testdb', { logLevel: 'error', storage: testStorage })
await db.indexes.create("*", "sys/id")

yields:

  Error {
    code: 'ENOENT',
    errno: -2,
    path: './testdb.acebase/[undefined]-#-sys/id.idx.build',
    syscall: 'open',
    message: 'ENOENT: no such file or directory, open \'./testdb.acebase/[undefined]-#-sys/id.idx.build\'',
  }

You can see it seems to be trying to open a .acebase file but there isn't one in this case. Is this a known issue? Am I doing something wrong?

Null property values in event callbacks

When creating nodes that have properties with null values, events are triggered with the same objects, including the null property values.

Example:

const ref = db.ref('users')
ref.on('child_added', snap => {
    const user = snap.val();
    console.log(user.address); // null (expected: undefined)
});
ref.push({ name: 'Ewout', address: null });

Null-valued properties should be stripped from the objects passed to event callbacks.
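
The expected behaviour could be implemented by stripping nulls recursively before callbacks run; a minimal sketch (not AceBase's actual implementation):

function stripNulls(obj) {
    if (obj === null || typeof obj !== 'object') { return obj; }
    for (const key of Object.keys(obj)) {
        if (obj[key] === null) { delete obj[key]; } // drop null-valued properties
        else { stripNulls(obj[key]); }              // recurse into child objects
    }
    return obj;
}
// stripNulls({ name: 'Ewout', address: null }) -> { name: 'Ewout' }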

Error when invoking .set function

Feel free to close this if irrelevant. Mentioning possible fix.

[test] Index build /bounds/*/search_field (fulltext) started
[test] Reading node "/" from address 0,0
[test] Node "/" being updated: adding 1 keys ("bounds"), updating 0 keys (), removing 0 keys ()
[test] Node "/bounds/ckzavyllh000b3p61u4fhbn8v/search_field" saved at address 0,17 - 1 addresses, 68 bytes written in 1 chunk(s)
[test] Node "/bounds/ckzavyllh000b3p61u4fhbn8v" saved at address 0,18 - 2 addresses, 144 bytes written in 1 chunk(s)
[test] Node "/bounds" saved at address 0,20 - 1 addresses, 38 bytes written in 1 chunk(s)
[test] Node "/" saved at address 0,0 - 1 addresses, 34 bytes written in 1 chunk(s)
[test] Reading node "/bounds/ckzavyllh000b3p61u4fhbn8v" from address 0,18
[test] Reading node "/bounds/ckzavyllh000b3p61u4fhbn8v" from address 0,18
[test] Reading node "/bounds/ckzavyllh000b3p61u4fhbn8v/search_field" from address 0,17
[test] Can't find event subscriptions to stop (path: "bounds/ckzavyllh000b3p61u4fhbn8v", event: (any), callback: undefined)
[test] Reading node "/bounds/ckzavyllh000b3p61u4fhbn8v/search_field" from address 0,17
[test] Indexed "/bounds/ckzavyllh000b3p61u4fhbn8v/search_field" value: 'ckzavyllh000b3p61u4fhbn8v,stockpileinboundtresttres' (object)
[test] done writing values to ./test.acebase/bounds-search_field.fulltext.idx.build
{ key: 'ckzavylli000c3p613rorwn7x', cost: 1500, quantity: 1 }
[test] Index build /bounds/*/materials/*/key started
[test] Index build /bounds/*/materials/*/search_field (fulltext) started
[test] Reading node "/bounds/ckzavyllh000b3p61u4fhbn8v" from address 0,18
[test] done writing build file ./test.acebase/bounds-search_field.fulltext.idx.build
[test] Node "/bounds/ckzavyllh000b3p61u4fhbn8v" being updated: adding 1 keys ("materials"), updating 0 keys (), removing 0 keys ()
[test] Node "/bounds/ckzavyllh000b3p61u4fhbn8v/materials/ckzavylli000c3p613rorwn7x" saved at address 0,21 - 1 addresses, 113 bytes written in 1 chunk(s)
[test] Node "/bounds/ckzavyllh000b3p61u4fhbn8v/materials" saved at address 0,22 - 1 addresses, 38 bytes written in 1 chunk(s)
[test] Node "/bounds/ckzavyllh000b3p61u4fhbn8v" saved at address 0,18 - 2 addresses, 154 bytes written in 1 chunk(s)
C:\System\lenlen\lendb-server\node_modules\acebase\src\api-local.js:728
typeof oldValue === 'object' && Object.keys(oldValue).forEach(key => !seenKeys.includes(key) && seenKeys.push(key));
^

TypeError: Cannot convert undefined or null to object
at Function.keys ()
at Object.childChangedCallback [as callback] (C:\System\lenlen\lendb-server\node_modules\acebase\src\api-local.js:728:60)
at C:\System\lenlen\lendb-server\node_modules\acebase\src\storage.js:494:25
at Array.forEach ()
at Object.trigger (C:\System\lenlen\lendb-server\node_modules\acebase\src\storage.js:493:18)
at C:\System\lenlen\lendb-server\node_modules\acebase\src\storage.js:1066:51
at Array.forEach ()
at triggerAllEvents (C:\System\lenlen\lendb-server\node_modules\acebase\src\storage.js:1012:14)
at processTicksAndRejections (internal/process/task_queues.js:75:11)
[nodemon] app crashed - waiting for file changes before starting...
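
The quoted line at api-local.js:728 checks typeof oldValue === 'object', but in JavaScript typeof null is also 'object', so Object.keys(null) throws exactly this TypeError. A null check would avoid the crash; a sketch, not the actual fix:

oldValue !== null && typeof oldValue === 'object' &&
    Object.keys(oldValue).forEach(key => !seenKeys.includes(key) && seenKeys.push(key));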

Querying on children

Let's say I have this collection:

//the collection sample
//parent collection here
stores = [
    {
        "some cuid": {
            "some props": "hello",
            // child collection
            products: [
                {
                    "some cuid": {
                        //some props
                    },
                },
            ],
        },
    },
]
//for example
db.ref("stores/some cuid").child('stores').push({
    // some props
})

How do I query products? I mean, I want to get all products.

Library is crashing while firing its change event on query

I have set up sort, filter and change listeners on the node "/bpm":

function gotMatches(snaps) {
    const data = {};
    snaps.forEach(snap => {
        data[snap.key] = snap.val();
    });
    lis(data);
}
const valueChanged = () => {
    db.get({ include, exclude }).then(gotMatches).catch(err);
};
function matchAdded() {
    valueChanged();
}
function matchChanged() {
    valueChanged();
}
function matchRemoved() {
    valueChanged();
}
db
    .on('add', matchAdded)
    .on('change', matchChanged)
    .on('remove', matchRemoved);
valueChanged();

[database] Reading node "/bpm/kytk61qh001doazk0c46fvkr" from address 0,10
0|index | [database] Node "/bpm/kytk61qh001doazk0c46fvkr" being updated: adding 0 keys (), updating 1 keys ("value"), removing 0 keys ()
0|index | [database] Node "/bpm/kytk61qh001doazk0c46fvkr" saved at address 0,10 - 1 addresses, 50 bytes written in 1 chunk(s)
0|index | TypeError: Cannot convert undefined or null to object
0|index | at Function.keys ()
0|index | at Object.childChangedCallback [as callback] (/Local Tcp Server/node_modules/acebase/src/api-local.js:728:60)
0|index | at /Local Tcp Server/node_modules/acebase/src/storage.js:494:25
0|index | at Array.forEach ()
0|index | at Object.trigger (/Local Tcp Server/node_modules/acebase/src/storage.js:493:18)
0|index | at /Local Tcp Server/node_modules/acebase/src/storage.js:1066:51
0|index | at Array.forEach ()
0|index | at triggerAllEvents (/Local Tcp Server/node_modules/acebase/src/storage.js:1012:14)
0|index | at processTicksAndRejections (node:internal/process/task_queues:78:11)
PM2 | App [index:0] exited with code [1] via signal [SIGINT]

time out / internal server error when trying to get data

Dear all,

I have the following function for requesting data from my AceBase database:

const fs = require("fs");
const { AceBaseClient } = require("acebase-client");
const db = new AceBaseClient({
    dbname: process.env.ACEBASE_DBNAME,
    host: 'localhost',
    port: process.env.ACEBASE_PORT,
    https: false
});

db.root.get(
    x => fs.writeFile("test.json", JSON.stringify(x.val(), null, 4), y => {})
);

Thus far, this worked without problems when running it on my server to get my data. However, since today (and without making changes to the code), I get WebSocket timeouts and an internal server error that I do not understand. Might it be related to an increased size of the data file I am trying to fetch, and if so, is there a way to increase the timeout for requests?

Many thanks,
Philipp

Sqlite3 not working

sqlite3 is not creating a .sqlite3 file, only an .acebase folder with data.db in it.

// index.js
const { AceBase, SQLiteStorageSettings } = require("acebase")
const db = new AceBase("mydb", new SQLiteStorageSettings({ path: "." }))
db.ready().then(() => {
    console.log("ready")
})

{ "name": "acesqlite3", "version": "1.0.0", "description": "", "main": "index.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1" }, "keywords": [], "author": "", "license": "ISC", "dependencies": { "acebase": "^1.0.8", "sqlite3": "^5.0.2" } }

const issue in storage.js

eslint shows the following error (amongst others) in storage.js

  1476:29  error    'ret' is constant                                      no-const-assign

This also prevents vite -> esbuild from building.

Line 1473 needs to change from const ret to let ret

eslint shows quite a few errors across multiple source files.

I also noticed that there are no test scripts specified in any package.json, and only one test file, path-info.spec.js.
Does AceBase have a comprehensive test suite? If not, is this planned?

Use more than one CPU

Good morning,
I'm implementing a system with the AceBase database, but we're facing a problem: the system makes several general listing queries, and some of these collections have 26 thousand records.

We put it on a server with 10 vCPUs, but observed that the database only uses 1 vCPU at a time. We are using pm2.
We tried putting it in cluster mode, but without success.

Is there any solution or alternative that would let us take advantage of all the server's vCPUs? With only 1 vCPU, when we make two or three large simultaneous requests, the database stalls for a long time.

Transaction Logging error on instantiating AceBaseLocalSettings

I was trying to use transaction logging and got this error:
TypeError: Cannot set properties of undefined (setting 'transactions')
at new AceBaseLocalSettings (C:\System\lenlen\lendb-server\node_modules\acebase\src\acebase-local.js:20:42)
at new AceBase (C:\System\lenlen\lendb-server\node_modules\acebase\src\acebase-local.js:33:19)
at Object. (C:\System\lenlen\lendb-server\test\refupdates.spec.js:2:12)
at Module._compile (node:internal/modules/cjs/loader:1097:14)
at Object.Module._extensions..js (node:internal/modules/cjs/loader:1151:10)
at Module.load (node:internal/modules/cjs/loader:975:32)
at Function.Module._load (node:internal/modules/cjs/loader:822:12)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:77:12)
at node:internal/main/run_main_module:17:47

Fortunately, I managed to fix it in acebase-local.js line 19 with this:

if (this.storage && typeof options.transactions === 'object') {
    this.storage.transactions = options.transactions;
}

I don't know whether it'll have bad side effects in the code; I just need it to propagate sorted queries without re-querying all over again.

Storing a complete object in a single db row/value.

I understand that AceBase stores certain object properties in multiple db rows to speed up read/write performance. These include object properties whose value is longer than maxInlineValueSize, and nested or child objects.

In the app I'm working on I store complex JSON documents in the db. Even a small document has resulted in 60 rows in the db. These JSON documents are always retrieved and saved in their entirety, i.e. as single JSON objects.

My concerns here are the considerable extra space required to save a document, along with the overhead of breaking up and reassembling the documents. The current browser DB I use, nanoSQL, stores all values in a single db row.

Here is an example of part of a JSON document in IndexedDB (screenshot omitted).

I could JSON.stringify() the documents, which should resolve this issue, but that (and JSON.parse() on the flip side) seems like unnecessary extra overhead. The DB is updated frequently as the user is editing a document, so performance is important.
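
For what it's worth, that workaround only needs a thin wrapper around the set()/get() API already used in these issues; a sketch with hypothetical helper names:

// Store the whole document as one string value (one value, one record):
async function saveDoc(db, path, doc) {
    await db.ref(path).set(JSON.stringify(doc));
}
// Parse it back into an object on read:
async function loadDoc(db, path) {
    const snap = await db.ref(path).get();
    return JSON.parse(snap.val());
}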

I think you can see where this is going. 😀

Would you consider providing an option to store values in a single db row? This could be an AceBase constructor option and/or an update() and set() option, or even a ref option. What do you think?

Docs for proxy.onMutation((mutationSnapshot, isRemoteChange) => ...) and possible issue

I was running into an endless recursion issue with an onMutation() callback. I finally worked out that I needed to skip my code in the callback when isRemoteChange is true.

i.e. the onMutation() callback was called twice for a single mutation: once with isRemoteChange false and again with it true.

I can't find any info on isRemoteChange other than the example code.

A bit more detail: I'd set someProxy.stop = true when it was false. Then I'm setting it to true again, which the data-proxy.ts code ignores because the value hasn't changed. I'm simply adding this info in case it is relevant.
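
For reference, the guard that stopped the recursion looks roughly like this (a minimal sketch based on the README's onMutation example):

someProxy.onMutation((mutationSnapshot, isRemoteChange) => {
    if (isRemoteChange) { return; } // skip echoes of changes we didn't make locally
    // ... handle locally originated mutations only
});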

(React TypeScript) TypeError: DataIndex is not a constructor

Hello! I've been encountering this problem every time I try to create an index for my database. I've been using AceBase on the browser-side with IndexedDB, and recreated this problem on Firefox, Chrome, and Opera. Here are the steps to recreate this problem:

  1. npx create-react-app test-acebase --template typescript
  2. Edit App.tsx to include these lines (with AceBase imported from the package):

import { AceBase } from 'acebase';

function App() {
  const db = AceBase.WithIndexedDB("test-db");
  db.indexes.create("users", "name");

  return (
  ...

  3. npm start
  4. The error DataIndex is not a constructor should show up in the browser.

Is indexing not meant to be used on the browser side? Or is there something else going on here? Thank you!

Exporting strings with multiple backslashes

When exporting strings that contain backslashes, the first one is doubled in the exported value:

var { AceBase } = require("acebase")
const db = new AceBase('test');
await db.ref('export').set({ text: 'Strings with multiple \\ backslashes \\ are not \\ exported ok' });
let json = '';
await db.ref('export').export({ write: str => json += str });

After execution, the value of json is {"text":"Strings with multiple \\ backslashes \ are not \ exported ok"}.
The expected value is {"text":"Strings with multiple \ backslashes \ are not \ exported ok"}.

The issue is caused by the escape function in the exportNode method in storage.js, which does a regexp replace on only the first occurrence of a backslash.
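
For illustration: in JavaScript, String.prototype.replace with a string pattern (or a non-global regexp) only replaces the first match, which produces exactly the behaviour described above:

'a \\ b \\ c'.replace('\\', '\\\\');  // "a \\ b \ c" - only the first backslash is doubled
'a \\ b \\ c'.replace(/\\/g, '\\\\'); // "a \\ b \\ c" - the global flag handles all of them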

Will be fixed soon

Change tracking improvement

The code that does change tracking currently clones the entire previous stored value to prepare the previous/new values that change events and index updates are triggered with. Because change events might be on higher paths than the data being updated, cloning the data costs memory and CPU while most of it isn't even changed; even worse, it also does this when the target data is being deleted. It would be better to create shallow copies of the original data (new objects whose properties point to the existing in-memory child objects) and recursively shallow copy the data up to the path being changed, only changing that data. Tracking changes this way is also faster, because most properties will point to the same data as the previous value's properties, eliminating the need to traverse the tree to find changes: previous.someObject === current.someObject.
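
A sketch of the proposed approach (hypothetical helper, not AceBase's code): shallow-copy only the nodes along the updated path and reuse every other child reference as-is:

// Returns a new root where only the objects on `path` are copied;
// all untouched branches keep their identity with the previous value.
function shallowCopyAlongPath(root, path, newValue) {
    const keys = path.split('/');
    const copy = { ...root };
    let node = copy;
    for (let i = 0; i < keys.length - 1; i++) {
        node[keys[i]] = { ...node[keys[i]] }; // shallow copy each node on the path
        node = node[keys[i]];
    }
    node[keys[keys.length - 1]] = newValue; // only the target leaf actually changes
    return copy;
}
// previous.someObject === current.someObject for unchanged branches,
// so change detection can skip them by identity.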

Generating unique keys for child objects

Hello! If I have this schema:

schema.set("users/$userId", {
    name: "string",
    "pets?": {
        "$petId": {
            name: "string",
            "awards?": {
                "$awardId": {
                    name: "string"
                }
            }
        }
    }
});

which are mapped to these models:

class User {
    public name: string;
    public pets?: {
        [key: string]: Pet;
    }
}

class Pet {
    public name: string;
    public awards?: {
        [key: string]: Award;
    }
}

class Award {
    public name: string;
}

Is there a way to auto-generate all the child keys simply by pushing a filled in top-most node into the database, like so?

const award = new Award();
award.name = "Some Award";

const pet = new Pet();
pet.name = "Some Pet";
pet.awards = {
    "*": award
};

const user = new User();
user.name = "Some User";
user.pets = {
    "*": pet
};

await db.ref("users").push(user);

Currently, the only way I could see to do this is this monstrosity of code, just to give the child nodes their own auto-generated keys:

const award = new Award();
award.name = "Some Award";

const pet = new Pet();
pet.name = "Some Pet";

const user = new User();
user.name = "Some User";

db.ref("users")
    .push(user)
    .then(ref => {
        db.ref(`users/${ref.key}/pets`)
            .push(pet)
            .then(ref2 => {
                db.ref(`users/${ref.key}/pets/${ref2.key}/awards`)
                    .push(award);
            });
    });

Is there a better way of achieving this? Thanks!


Edit: Seems I was too tired last night to realize this is also possible:

const award = new Award();
award.name = "Some Award";

const pet = new Pet();
pet.name = "Some Pet";

const user = new User();
user.name = "Some User";

const userRef = await db.ref("users").push(user);
const petRef = await db.ref(`users/${userRef.key}/pets`).push(pet);
const awardRef = await db.ref(`users/${userRef.key}/pets/${petRef.key}/awards`).push(award);

But the problem is still there: this solution really doesn't scale well once nodes are nested deeper. It would be great if there were a way to automatically generate unique keys for child nodes in object collections, preferably as a built-in feature of this library.
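
A generic helper could hide that chaining for one level of child collections (deeper levels like awards would take a recursive variant); a sketch built only on ref.push() and ref.child(), with a hypothetical helper name:

// Pushes `value` without its child collections, then pushes each child
// one level down so every child gets its own generated key.
async function pushWithChildren(ref, value, collectionNames) {
    const collections = {};
    for (const name of collectionNames) {
        if (value[name]) {
            collections[name] = value[name];
            delete value[name]; // write the parent without this collection
        }
    }
    const parentRef = await ref.push(value);
    for (const name of Object.keys(collections)) {
        for (const child of Object.values(collections[name])) {
            await parentRef.child(name).push(child); // generated key per child
        }
    }
    return parentRef;
}
// await pushWithChildren(db.ref("users"), user, ["pets"]);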

How to speed up loading in 4GB CSV?

Hey, I'm wondering if I'm doing something wrong. I have a 4GB CSV file where each line points one string to a list of strings. I have about 3 million lines.

  • I tried setting them one at a time and it was taking a while.
  • I noticed the readme recommends doing large batches with update(). So I'm batching my updates into groups of 10k rows, but each batch is still taking a minute or two, so the whole file is going to take five to ten hours at this rate. I can't do the whole thing at once because I'll run out of RAM.
  • I have logging set to error only.
  • Node's memory limit should be 12 GB because I set --max-old-space-size=12000.
  • Node is running in Terminal.app, not VS Code's integrated terminal, because I've heard that can slow processes down.
  • Loading the whole file into object batches without writing to AceBase only took a couple of minutes.
  • After each batch update completes I get a bunch of these messages:

write lock on path "whatever" by tid whatever (_lockAndWrite "whatever") is taking a long time to complete [1]

Here's the main bit of code:

  let giantObj: Record<string, string[]> = {}

  await processLines(path + '/data.tsv', async (line, num) => {
      if (num % 10_000 === 0) {
          log('on line', frac(num, numStargazerRows))
          log('putting batch in database:')
          await db.ref('gazers').update(giantObj)
          log('batch done')
          log('freeing old obj hopefully')
          giantObj = {}
      }
      const cols = line.split('\t')
      const head = cols[0].replace('/', '!!')
      const tail = cols.slice(1)
      giantObj[head] = tail
  })

Any suggestions for loading the data in faster?

Schema definition not enforcing policy on child paths

When a schema is defined on a certain path, it is not enforcing rules (preventing invalid updates) on target child paths. All works well if data is updated on higher or same paths as the schema definition, but not on deeper paths.

Example:

db.schema.set('clients/*', {
   name: 'string',
   contacts: {
      '*': {
         type: 'string',
         name: 'string',
         email: 'string'
      }
   }
});

let result = db.schema.check('clients/client1', { name: 'Some client', contacts: { contact1: 'invalid contact' } }, false);
// result.ok === false, desired result

result = db.schema.check('clients/client1/contacts/contact1', 'invalid contact', false);
// result.ok === true, which is wrong

Fix is on its way

Incorrect query results

I have a db with the following content:

songs: {
  klpmmym80000ggkkuzd3dtg1: {
    title: 'life on the Edge',
    artist: 'John Mayer',
    year: 2014,
    genre: 'pop',
    sex: 'none'
  },
  klpmqzur0000vgkkpn0g4uho: {
    title: 'Daughters',
    artist: 'John Mayer',
    year: 2014,
    genre: 'pop'
  },
  klpmtqrx0000xokkvim8n26w: {
    title: 'Crazy',
    artist: 'Gnarls Barkley',
    year: 2012,
    genre: 'rock'
  },
  klppvq780000hkkkvwdt4saf: {
    title: 'Lovely Sky',
    artist: 'Martha Smith',
    year: 2012,
    genre: 'folk'
  },
  klppvq7a0001hkkkgc27oea1: {
    title: 'Skylark',
    artist: 'Macey Grey',
    year: 2014,
    genre: 'pop'
  }
}

And these indexes:

db.indexes.create('songs', 'year')
db.indexes.create('songs', 'genre')
db.indexes.create('songs', 'year', { include: ['genre'] })

When I run this query:

db.query( 'songs' )
        .filter( 'year', '>=', 2012 )
        .filter( 'genre', 'in', ['jazz','rock','blues'])
        .sort( 'title' )
        .get()

I get 5 songs:

snapshot.val: { title: 'Crazy', artist: 'Gnarls Barkley', year: 2012, genre: 'rock' }
snapshot.val: { title: 'Daughters', artist: 'John Mayer', year: 2014, genre: 'pop' }
snapshot.val: { title: 'Lovely Sky',  artist: 'Martha Smith',  year: 2012,  genre: 'folk'}
snapshot.val: { title: 'Skylark', artist: 'Macey Grey', year: 2014, genre: 'pop' }
snapshot.val: {  title: 'life on the Edge',  artist: 'John Mayer',  year: 2014,  genre: 'pop',  sex: 'none'}

When only the first one should match. If I do this without the last (composite) index I get the correct results.

It may well be that this combination of indexes is invalid or doesn't make sense!

Realtime synchronization with a live data proxy not persisting to db

I've written code based on https://github.com/appy-one/acebase#realtime-synchronization-with-a-live-data-proxy and have traced through the AceBase code in data-proxy.ts and can see the underlying object get changed in set(target, prop, value, receiver) however nothing is written to the database and there are no errors.

set() calls context.flag('write', context.target.concat(prop)); which calls scheduleSync() but db isn't updated.

The specific property is a string assignment.

This is using IndexedDb in the Browser.

B+Tree lookup breaks

When an empty leaf is removed (see #5), the parent node does not get updated correctly, causing some tree lookups to fail. The problem is that the gt/lt child offset is updated instead of the index. The fix will be in the following commit.

TypeError: Unknown chunk type 91 while reading record at

Hi Acebase team, thanks for your great product.

I've run into a problem during my tests of AceBase server ([email protected] running on Node.js).
I've created a new database and have a script running that stores data in the database.

After a while my server kept crashing and I noticed this kind of error (see below).
I'm not able to run any queries or delete data. This crashes my server script and the following error appears in the server log:
NodeReader.getValue:child error: TypeError: Unknown chunk type 91 while reading record at "/items/0/3/16766808/item_data" @197,405
at NodeReader.readHeader (/......./node_modules/acebase/src/storage-acebase.js:2575:23)
at async NodeReader.getValue (/......./node_modules/acebase/src/storage-acebase.js:1843:13)
at async loadChildValue (/......./node_modules/acebase/src/storage-acebase.js:1919:37)
at async Promise.all (index 2)
at async NodeReader.getValue (/......./node_modules/acebase/src/storage-acebase.js:1963:21)
at async loadChildValue (/......./node_modules/acebase/src/storage-acebase.js:1919:37)

This is what I tried:
server.db.query('items')
    .remove(() => {
        // Old junk gone
    });

Then I was trying to delete the node directly:

server.db.ref('items/0/3/16766808/item_data')
    .remove()
    .then(() => {

    });

This was the error:
/......./node_modules/acebase/src/storage-acebase.js:2575
throw new TypeError(`Unknown chunk type ${type} while reading record at ${this.address}`);
^

TypeError: Unknown chunk type 91 while reading record at "/items/0/3/16766808/item_data" @197,405
at NodeReader.readHeader (/......./node_modules/acebase/src/storage-acebase.js:2575:23)
at async NodeReader.getAllocation (/......./node_modules/acebase/src/storage-acebase.js:1773:9)
at async /......./node_modules/acebase/src/storage-acebase.js:2688:36
at async Promise.all (index 0)
at async _mergeNode (/......./node_modules/acebase/src/storage-acebase.js:2701:5)
at async Object.write [as _customWriteFunction] (/......./node_modules/acebase/src/storage-acebase.js:1420:28)
at async AceBaseStorage._updateNode (/......./node_modules/acebase/src/storage-acebase.js:1433:26)
at async AceBaseStorage.setNode (/......./node_modules/acebase/src/storage-acebase.js:1349:9)
at async DataReference.set (/......./node_modules/acebase-core/dist/data-reference.js:158:13)
error: Forever detected script exited with code: 1

I can't get count of children

I can't get the count of children; it comes out undefined.

import { AceBase } from 'acebase'
const db = AceBase.WithIndexedDB('my_db'); // browser

function fPush() {
    db.ref('game/config').push({ name: 'name 1', test: 1 })
}

async function getCount() {
    const valCount = await db.ref('game/config').count()
    console.log(valCount); // undefined
}

IndexedDb extra rows

I have an object such as:

user: {
    emailAddr: "[email protected]",
    passwordHash: "$2a$10$/adc6zcERXdkTnSXNEQ6suac1S48tK3PsfPqgszBN5PUTN1hnG2Ia",
    state: "signedIn",
    token: "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiI2aXhvUkpqdHhsVjk1QUZkYVFiUlAiLCJwYXNzd29yZEhhc2giOiIkMmEkMTAkL2FkYzZ6Y0VSWGR",
    userId: "6ixoRJ",
    userName: "blah"
}

When I do db.ref( path ).update( user ), the IndexedDb content table has three rows:

"clibu_users/6ixoRJjtxlV95AFdaQbRP": {userId: "6ixoRJjtxlV95AFdaQbRP", userName: "associated_sapphire_hippopotamus", emailAddr: "[email protected]", state: "signedIn", id: "6ixoRJjtxlV95AFdaQbRP"}
"clibu_users/6ixoRJjtxlV95AFdaQbRP/passwordHash": "$2a$10$/adc6zcERXdkTnSXNEQ6suac1S48tK3PsfPqgszBN5PUTN1hnG2Ia"
"clibu_users/6ixoRJjtxlV95AFdaQbRP/token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiI2aXhvU

passwordHash and token are stored in extra rows: their key is in the path and just their value is in the Value column.

I'm curious as to why it works this way and am concerned about the extra rows wasting space.

Along similar lines, the nodes table also has these extra rows.

Failed to add to transaction log: Cannot store arrays with missing entries

When the value of an array is updated with transaction logging switched on, the error "Failed to add to transaction log: Cannot store arrays with missing entries" will be thrown. This is because AceBase does not store sparse arrays, and updates to array values (being written to the transaction log) are always sparse. To fix this, those mutations in the transaction log will have to be either stored as an object or string.
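
For illustration, why updating a single array entry yields a sparse array in JavaScript:

const update = [];
update[2] = 'new value';    // indexes 0 and 1 become holes
console.log(update.length); // 3
console.log(0 in update);   // false: entries are missing, hence the storage error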

Indexed query with double filter on key fails

When executing a query with multiple filters on an indexed key, execution will return wrong results if the index has include keys, or crash with a TypeError: Cannot read properties of undefined (reading '[key]') if the index has no include keys.

Example:

await db.indexes.create('meteorites', 'name'); // Create index on name, no additional included keys
const snaps = await db.query('meteorites')
    .filter('name', '!like', 'L*')
    .filter('name', '!=', 'Acapulco') // Another filter on name causes the crash
    .get();

This is caused by the initial index results being filtered on metadata (include) filters, while the index's key is (obviously) not in the metadata. Solution: filter index results on the indexed value if the key equals index.key.

count crashes if a node has no children

db.ref('songs').count() will raise an exception if the 'songs' node has no children.

data-reference.ts

    count() {
        return this.reflect("info", { child_count: true })
        .then(info => {
            return info.children.count;  <<<-- HERE
        })
    }
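
A guard there would avoid the exception; a sketch of a possible fix (not necessarily the actual one):

    count() {
        return this.reflect("info", { child_count: true })
        .then(info => {
            return info.children ? info.children.count : 0; // no children: count is 0
        })
    }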

Live proxy: Uncaught (in promise) TypeError: Cannot read property 'push' of undefined

I was trying the live proxy using the following example of yours:

//server/index.js
const { AceBaseServer } = require("acebase-server")
const dbname = "mydb"
const server = new AceBaseServer(dbname, {
    host: "192.168.43.50",
    port: 5757,
    authentication: {
        enabled: false,
        allowUserSignup: false,
        defaultAccessRule: "allow",
        defaultAdminPassword: "75sdDSFg37w5",
    },
})
server.on("ready", () => {
    console.log("SERVER ready")
})

but I changed your example a bit, because I don't want to wrap it in an async function.

//client/index.js
import { AceBaseClient } from "acebase-client"

const db = new AceBaseClient({
    host: "192.168.43.50",
    port: 5757,
    dbname: "mydb",
    https: false,
})
db.ready(() => {
    console.log("Connected successfully")
})

let liveChat = {}
db.ref("chats/chat1")
    .proxy({})
    .then((r) => {
        liveChat = r.value
        liveChat.title = "Live Data Proxies Rock! 🚀"
        liveChat.members = ["ewout", "john", "pete", "jack"]
        liveChat.messages.push({
            from: "ewout",
            text: "Updating a database was never this easy",
        }) // throws: Uncaught (in promise) TypeError: Cannot read property 'push' of undefined
    })

if (liveChat?.onChange)
    liveChat.onChange(function (val, prev, remoteChange, context) {
        // Handle specific (local or remote) changes:
        if (val.title !== prev.title && remoteChange) {
            console.log(`Title was changed by someone else`)
        }
        if (prev.members.includes("ewout") && !val.members.includes("ewout")) {
            console.log(remoteChange ? `I was kicked out of this chat` : `I stepped out`)
        }
    })

function sendMessage(text) {
    // Changes made remotely are automatically updated in our liveChat object:
    if (!liveChat.members.includes("ewout")) {
        throw new Error(`Can't send message, I'm not a member anymore!`)
    }
    liveChat.messages.push({
        from: "ewout",
        text,
    })
}

liveChat.messages.push throws
Uncaught (in promise) TypeError: Cannot read property 'push' of undefined
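
A likely cause: the proxy was created with an empty default value ({}), so liveChat.messages does not exist at the time of the push. Passing a default value that includes the array should avoid the error; a sketch, assuming proxy()'s argument is used as the initial value as in the example above:

db.ref("chats/chat1")
    .proxy({ title: "", members: [], messages: [] }) // default value contains the messages array
    .then((r) => {
        liveChat = r.value
        liveChat.messages.push({ from: "ewout", text: "hello" }) // messages now exists
    })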

Enable es6 import in Node.js

I'm writing all my Node.js code using ES6 imports; however, AceBase currently needs the following hack for import to work:

import pkg from 'acebase';
const { AceBase } = pkg;

Double event callbacks using on(event, callback) syntax

Events 'value', 'child_added' and 'notify_child_added' are fired twice for each value when using ref.on(event, callback) syntax.
To reproduce:

db.ref('messages').on('value', snap => {
   console.log('value callback'); // Prints twice
});

db.ref('messages').on('child_added', snap => {
   console.log('child_added callback'); // Prints twice for each child
});

db.ref('messages').on('notify_child_added', ref => {
   console.log('notify_child_added callback'); // Prints twice for each child
});

This does not happen when using ref.on(event).subscribe(callback)
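
For reference, the subscribe-based form mentioned above, which fires only once per value:

db.ref('messages').on('value').subscribe(snap => {
   console.log('value callback'); // prints once
});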

RangeError: Offset is outside the bounds of the DataView

While performing automated tests to solve #65, this error appeared: RangeError: Offset is outside the bounds of the DataView in storage-acebase.js:602.

While debugging this, I found out that my bulk data import test causes the FST (Free Space Table) to be flooded with tiny ranges of 2 records each, because they are not being allocated in my particular tests. At one point there are just too many ranges to store in the FST and it fails. The funny detail is that I anticipated this ever happening (line 613), but did not realize the actual crash would occur before my own check.

There's space for 8191 ranges in the FST. I'm going to fix this by enforcing this limit, removing the smallest ranges when needed. That means some free space in the db file will be lost forever, but the chance of this actually happening in a real-world database is fairly small.

Query questions

I see I can query() using like, which is case-insensitive; however, I can't see a way to perform a case-insensitive sort.

Is it possible to use our own custom query filter function? Something like filter(key, operator, compare), where operator would be a function we'd provide and you'd pass it key and compare. In nanoSQL you can add query functions.

Finally, what is the best way to iterate all nodes in a path one at a time, in a way that is performant and doesn't use lots of memory (doesn't cache)? E.g. like db.query('songs') to get every node, one at a time.
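
Until something like that is built in, one workaround for the case-insensitive sort is to sort client-side after fetching; a sketch using the query API shown elsewhere in these issues:

const snaps = await db.query('songs').take(1000).get(); // fetch a page of results
const songs = [];
snaps.forEach(snap => songs.push(snap.val()));
// localeCompare with base sensitivity compares case-insensitively:
songs.sort((a, b) => a.title.localeCompare(b.title, undefined, { sensitivity: 'base' }));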

Adding a 2.5MB value gives "offset is out of bounds" error

I'm adding a value which is a "data:image/jpeg;base64,/..." string of approximately 2.5MB to AceBase v1.5.0 running in Node v16, and I get the following repeatable error:

offset is out of bounds at 2021-05-17 14:33:55
        at Uint8Array.set (<anonymous>)
        at c:\webapps\clibuNotes\node_modules\acebase\src\storage-acebase.js:1526:21
        at c:\webapps\clibuNotes\node_modules\acebase\src\storage-acebase.js:1807:41

followed by non-stop:

read lock on path "/clibu_notes/kvigX6oc4vGkT9V4Gdqy7" by tid 2 (Node.getValue "/clibu_notes/kvigX6oc4vGkT9V4Gdqy7") is taking a long time to
complete [1]
...

Error: Unexpected token in storage-acebase.js

Hello. I ran into this problem in version 1.12.3.
Node v13.10.1 on Windows 10.

...\node_modules\acebase\src\storage-acebase.js:979
if (this.settings.transactions?.log !== true) { throw new Error('Transaction logging is not enabled'); }
^

SyntaxError: Unexpected token '.'
at wrapSafe (internal/modules/cjs/loader.js:1063:16)
at Module._compile (internal/modules/cjs/loader.js:1111:27)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1167:10)
at Module.load (internal/modules/cjs/loader.js:996:32)
at Function.Module._load (internal/modules/cjs/loader.js:896:14)
at Module.require (internal/modules/cjs/loader.js:1036:19)
at require (internal/modules/cjs/helpers.js:72:18)
at Object. (D:\PROJECTS\njs-querize\node_modules\acebase\src\api-local.js:3:52)
at Module._compile (internal/modules/cjs/loader.js:1147:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1167:10)
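
The offending line uses optional chaining (?.), which Node 13 does not support (it arrived in Node 14). For illustration, an equivalent check without it:

if (!(this.settings.transactions && this.settings.transactions.log === true)) {
    throw new Error('Transaction logging is not enabled');
}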

Mixing update() etc. with a live data proxy?

From what I'm seeing, if you call update() on a path that has a Proxy, the proxy isn't aware of the update's changes.

Assuming this isn't a bug and is too difficult to resolve, that's fine; however, the docs need to boldly point out that you can't mix the two approaches.

Safari IPC issue "TypeError: undefined is not an object (evaluating 'this.channel.postMessage')"

Safari (on Mac & iOS) doesn't support BroadcastChannel, which is checked for in the browser IPCPeer constructor but not in its sendMessage method, which is actively used by AceBase.

This results in the error "TypeError: undefined is not an object (evaluating 'this.channel.postMessage')" and the database not being created correctly / working properly.

This affects Safari on Mac, and all browsers on iOS (they all use Safari behind the scenes)
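
A guard in sendMessage, mirroring the feature check the constructor already does, would prevent the crash; a minimal sketch, not the actual fix:

sendMessage(message) {
    if (!this.channel) { return; } // BroadcastChannel unsupported (Safari): nothing to post to
    this.channel.postMessage(message);
}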

B+Tree empty leaf issue

Bug: If a B+Tree leaf becomes empty (all its entries have been removed), iterating the tree fails on leaf.getNext() because a console.assert assumes there are entries in the leaf.

Solution: Change console.assert to allow 0 leaf entries

Enhancement: While having empty leaves is OK, it would be nicer if they were removed once they become empty.

storage-custom.js:1142 Assertion failed: Merging child values can only be done if existing and current values are both an array or object

I got this assertion failure in CustomStorage.getNode(). It was trying to merge a standalone string value (stored in its own record because of its size) with the parent object, which also has an (old, shorter) string value stored inline. Apparently, in some cases, if a large string is saved in its own record, it is not removed when the value is overwritten by a shorter string (stored in its parent record), causing 2 versions to be saved that clash in getNode.

This problem occurs in CustomStorage classes, which is used for IndexedDB and LocalStorage in the browser. Normal AceBase databases are not affected.

To reproduce, run in the browser:

const { AceBase } = require('acebase');
const db = AceBase.WithIndexedDB('test');
db.ready(async () => {
   await db.ref('test').set({ str: 'Large string value that will be stored in its own record because it has more than 50 chars' });
   await db.ref('test').set({ str: 'Short string' }); // will be saved in parent record
   const test = await db.ref('test').get(); // <-- assertion failure
});

I've found the code causing this issue, fix is on its way.

Locking improvement

Locking is now implemented by only allowing 1 write at the same time, while also denying read requests during writes. Back in the old days, locking was path based: if 2 paths did not cross they were allowed to proceed in parallel. This was disabled later on because it resulted in deadlocks in particular cases. While only allowing 1 write lock at a time is ok, many read requests should be allowed to proceed during writes. Even better if deadlocks could be prevented; multiple simultaneous writes would become possible again. This has to be investigated.
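
The desired policy resembles a classic readers-writer lock: many concurrent readers, writers exclusive, and queued writers blocking later readers so they don't starve. A deliberately simplified sketch (not AceBase's lock manager):

class RWLock {
    constructor() { this.readers = 0; this.writer = false; this.queue = []; }
    acquire(write) {
        return new Promise(resolve => {
            this.queue.push({ write, resolve });
            this._drain();
        });
    }
    release(write) {
        if (write) { this.writer = false; } else { this.readers--; }
        this._drain();
    }
    _drain() {
        while (this.queue.length > 0) {
            const next = this.queue[0];
            if (next.write) {
                if (this.writer || this.readers > 0) { break; } // writer needs exclusivity
                this.writer = true;
            } else {
                if (this.writer) { break; } // readers wait for the active writer
                this.readers++;             // multiple readers may proceed together
            }
            this.queue.shift();
            next.resolve();
        }
    }
}
// const lock = new RWLock();
// await lock.acquire(false); try { /* read */ } finally { lock.release(false); }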

Exception with filters

Running the following query

db.query('songs')
    .filter('year', '==', 2014)                     // uses the index on year
    .filter('genre', 'in', ['jazz','rock','blues']) // uses the index on genre
    .get();

where there is a match on the year filter but no match on the genre filter, yields:

Using indexes for query: /songs/*/year, /songs/*/genre
(node:54336) UnhandledPromiseRejectionWarning: Error: TypeError: Cannot read property 'value' of undefined
    at C:\webapps\play2\AceBase\node_modules\acebase-core\dist\data-reference.js:795:19

If I change the genre filter so it does match a doc in 2014, it works as expected.

It doesn't matter whether indexes are used or not.

Locking issues after update from 1.5.0 to >=1.6.0 (bug?)

Hi,

After updating acebase from 1.5.0 to 1.6.0 and higher, I'm experiencing issues with "locking".

I didn't change my code and have no idea how to deal with those locking issues right now.

On 1.5.0 everything worked really fast, almost instantly, but on 1.6.0+ the same functions take up to 1 minute even for lightweight tasks, with read lock on path ... messages in the console.

[localtest] Reading node "/user/1365174619/account" from address 2,60
[localtest] Reading node "/user/1365174619/account" from address 2,60
[localtest] Reading node "/user/1365174619/products/active" from address 1,789
[localtest] Reading node "/user/1365174619/account" from address 2,60
read lock on path "/" by tid 1 (Node.getInfo "/") is taking a long time to complete [1]
read lock on path "/" by tid 1 (Node.getInfo "/") is taking a long time to complete [2]
read lock on path "/" by tid 1 (Node.getInfo "/") is taking a long time to complete [3]
lock :: read lock on path "/" by tid 1 (Node.getInfo "/") took too long
[localtest] Node "/user/1365174619" being updated: adding 0 keys (), updating 0 keys (), removing 1 keys ("account")
[localtest] Node "/user/1365174619" saved at address 31,646 - 1 addresses, 24 bytes written in 1 chunk(s)

Are there any examples of "do this / don't do that" to avoid locking issues?

Node: 15.14.0
