
nano-sql's People

Contributors

canrau, heri16, jesuso, mattleff, styfle, vikrantpogula, vladimiry

nano-sql's Issues

Question: about range and limit

Your documentation shows:

The .range() has the same effect as .offset() and .limit()

My source code is:

nSQL("users")
    .query("select", ["id", "username"])
    .range(3, 1)
    .exec()
    .then(result => {
      console.log(result);
    });

It returns four records, but

nSQL(usersTab)
    .query("select", ["id", "username"])
    .limit(3)
    .offset(1)
    .exec()
    .then(result => {
      console.log(result);
    });

It returns three records.
The two queries do not seem to have the same effect.
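
For reference, a minimal side-by-side check (this assumes .range(limit, offset) is intended as shorthand for .limit(limit).offset(offset); if so, the two result sets below should have the same length and ids):

Promise.all([
  nSQL("users").query("select", ["id", "username"]).range(3, 1).exec(),
  nSQL("users").query("select", ["id", "username"]).limit(3).offset(1).exec()
]).then(([rangeRows, limitRows]) => {
  console.log("range:", rangeRows.length, "limit/offset:", limitRows.length);
});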

Add .npmignore

Did we mean to distribute png files?

  • logo.png
  • logo_full.png
  • examples/**
  • webpack.config.js
  • tslint.json
  • tsconfig.json
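
A sketch of what such an .npmignore could contain, based only on the files listed above (whitelisting via the "files" field in package.json would be an alternative approach):

# .npmignore (sketch)
logo.png
logo_full.png
examples/
webpack.config.js
tslint.json
tsconfig.json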

ORM select query not working both ways

I have an application with two tables named videos and producers respectively. The models are as follows:

nSQL('videos')
  .model([
    { key: 'id', type: 'int', props: ['pk', 'ai'] },
    { key: 'name', type: 'string' },
    { key: 'producerId', type: 'producers', props: ['ref=>videoIds[]'] }
  ]);
nSQL('producers')
  .model([
    { key: 'id', type: 'int', props: ['pk', 'ai'] },
    { key: 'name', type: 'string' },
    { key: 'videoIds', type: 'videos[]', props: ['ref=>producerId'] }
  ]);

It is a one-to-many relationship, that means each producer has many videos and each video has one producer. Inserting entries works like a charm, my test data looks as follows inside IndexedDB:

videos Table:

Key Value
1 { id: 1, name: "test1", producerId: 1 }
2 { id: 2, name: "test2", producerId: 1 }
3 { id: 3, name: "test3", producerId: 2 }

producers Table:

Key Value
1 { id: 1, name: "prod1", videoIds: [1, 2] }
2 { id: 2, name: "prod2", videoIds: [3] }

Now I want to request an entry from the videos table with its respective producer in the result as follows:

nSQL('videos')
  .query('select')
  .orm(['producerId'])
  .where(['id', '=', 1])
  .exec();

The result however looks like this:

{
  id: 1,
  name: "test1",
  producerId: 1
}

If I try it the other way around it does work:

nSQL('producers')
  .query('select')
  .orm(['videoIds'])
  .where(['id', '=', '1'])
  .exec();

The result is:

{
  id: 1,
  name: "prod1",
  videoIds: [
    { id: 1, name: "test1", producerId: 1 },
    { id: 2, name: "test2", producerId: 2 }
  ]
}

I already tried and tested a lot on my end but can't seem to find a bug in my application. Also, I copied and ran the example from the official documentation which gave the same results, meaning the author variable in that case was still the user id rather than the user object.

Am I doing something wrong or is there possibly a bug?

Duplicate upsert

When the following function gets called, the result is an array containing two identical inserts.

function addNewBlock(obj) {
  console.log('ran addNewBlock. obj: ',obj);
  nSQL('aniblocks')
  .doAction('add_new_block',{aniblock:obj}).then(function(result, db) {
    console.log(result);
    return db.getView('get_block_by_blockID',obj);
  }).then(function(result, db) {
    console.log('new block: ',result);
    return result;
  });
}

This is how nano-sql is being set up in that file:

nSQL('aniblocks')
.model([
  {key:'id',type:'uuid',props:['pk']},
  {key:'blockID', type: 'string', props: ["idx"]},
  {key:'blockInnerHTML', type: 'string'},
  {key:'foDom', type: 'string'},
  {key:'blockStartTime', type: 'float'},
  {key:'blockTimeScale', type: 'float'},
  {key:'title',type:'string'},
  {key:'type',type:'string'},
  {key:'settings',type:'map', default: null},
  // {key:'funcStr',type:'string', default: null},
  // {key:'removeSelector',type:'string', default: null},
  {key:'timelineData',type:'blob'}
])
.actions([
  {
    name: 'add_new_block',
    args: ['aniblock:map'],
    call: function(args,db) {
      console.log('ran add_new_block. args: ',args);
      return db.query('upsert',args.aniblock).exec();
    }
  },
  {
    name: 'update_block',
    args: ['aniblock:map'],
    call: function(args,db) {
      var D = args.aniblock;
      // console.log(args.aniblock);
      return db.query('upsert',D).where(['blockID','=',D.blockID]).exec();
    }
  },
  {
    name: 'delete_block',
    args: ['aniblock:map'],
    call: function(args,db) {
      var D = args.aniblock;
      return db.query('delete',D).where(['blockID','=',D.blockID]).exec();
    }
  }
])
.views([
  {
    name: 'get_block_by_blockID',
    args: ['blockID:string'],
    call: function(args,db) {
      console.log('ran get_block_by_blockID. args: ',args);
      return db.query('select').where(['blockID','=', args.blockID]).exec();
    }
  },
  {
    name: 'list_all_blocks',
    args: ['blockID:string'],
    call: function(args,db) {
      console.log('ran list_all_blocks');
      return db.query('select').exec();
    }
  }
])
.on('error', function(eventData){
  console.log(eventData);
})
// .config({persistent:true})
.connect().then(function(result,db){
  connected = true;
});

If I run list_all_blocks both of the items are returned. Any idea what I'm doing wrong?

Remove leveldown dependency

I would like to use nano-sql with RocksDB, which is a successor to leveldown. I wonder whether we could remove the leveldown dependency to further reduce the package size.
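
One packaging approach, sketched here as a general npm pattern rather than a statement about how nano-sql is actually wired (the version range is only a placeholder), would be to move leveldown into optionalDependencies so installs that do not need it can skip it:

{
  "optionalDependencies": {
    "leveldown": "^1.9.0"
  }
}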

blob type no longer supported?

My table was set up with the blob type for one of the fields. It works on Mac, but on Windows the value becomes null on insert. Just a question. I changed the type to map and it's working in Windows now.

support for protocol buffers?

Hello, this looks great, thanks very much. I'm using the levelDB setting, as I'd like to save a bunch of records via Node.js and then copy it all over to my server for read-only access from my serverless single-page JavaScript app.

It looks like I'll be able to do that nicely with this library. I'm also planning to use protocol buffers instead of JSON, since I have a lot of data. I've seen some other Node.js libraries that save to LevelDB this way, and I think I can just decode/encode around yours, but have you thought of adding support for this?
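
A rough sketch of that decode/encode approach, using protobufjs. The records.proto file, the myapp.Record message, and the 'records' table with id and payload columns are all hypothetical names for illustration; storing the bytes as base64 in a string column is just one way to keep them in a nano-sql row:

const protobuf = require('protobufjs');
const { nSQL } = require('nano-sql');

protobuf.load('records.proto').then(root => {
  const Record = root.lookupType('myapp.Record');

  // Encode a plain object to protobuf bytes and store them as base64 text.
  function saveRecord(obj) {
    const bytes = Record.encode(Record.create(obj)).finish();
    return nSQL('records')
      .query('upsert', { id: obj.id, payload: Buffer.from(bytes).toString('base64') })
      .exec();
  }

  // Read a row back and decode the payload into a plain object.
  function loadRecord(id) {
    return nSQL('records').query('select').where(['id', '=', id]).exec()
      .then(rows => Record.toObject(Record.decode(Buffer.from(rows[0].payload, 'base64'))));
  }
});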

Thanks again.

Getting last item created

Hi, great job with the library, I got a question:

Is there a way to obtain the inserted data on upsert? I'm trying to create a library on top of Nano-SQL to make 'models' and 'relations' handling easier, something like this:

Person.create({ name: 'alice', age: 21 }).then(person => {
  // person: Person { id: 1, name: 'alice'... }
  person
    .pets()
    .create({ name: 'Max', nickname: 'Chew Barka' }).then(pet => {
      // pet: Pet { id: 1, name: 'Max'... }
      console.log('Added ' + pet.name + ' to ' + person.name + ' pet list')
    })
})

I would like to accomplish such task with something like this:

class Person {
  static async create (attributes) {
    return new Promise((res, rej) => {
      nSQL('users')
        .query('upsert', attributes)
        .exec()
        .then((result, db) => {
          // Grab the last inserted record and inflate it
          res(result) // [ { msg: '1 row(s) inserted' } ]

          // Ideally something like this should happen:
          // Return an instance of this class with the newly created data
          res(new this(result))
        })
    })
  }
}

The problem I have is that on upsert I get the result mentioned above ([ { msg: '1 row(s) inserted' } ]), and I find it difficult to retrieve the newly created model. I could simply instantiate the model using the passed 'attributes', but this would skip data created by the database, such as the id and defaulted attributes that were not passed.
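
One workaround sketch, an assumption about a viable pattern rather than a documented API: if the table's primary key is a uuid generated client-side, the inserted row can be re-selected (defaults included) without relying on the upsert result:

const { v4: uuidv4 } = require('uuid'); // any uuid generator works

class Person {
  static create (attributes) {
    const id = uuidv4();
    return nSQL('users')
      .query('upsert', Object.assign({ id: id }, attributes))
      .exec()
      .then(() => nSQL('users').query('select').where(['id', '=', id]).exec())
      .then(rows => new Person(rows[0])); // row now includes defaulted columns
  }
}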

Thanks in advance, keep up the good work!

nSQL: Database not connected, can't do a query!

When I enable history my app errors with "nSQL: Database not connected, can't do a query!"
When disabling history, the app works as expected.

I extracted a small example to demonstrate the issue...

I am using version 1.5.1

import {nSQL} from "nano-sql";

nSQL('users')
    .model([
        {key:'id',type:'int',props:['pk','ai']},
        {key:'name',type:'string'}
    ])
    .actions([
        {
            name: 'createUser',
            args: ['name:string'],
            call: async (opts, db) => await db.query('upsert', {name: opts.name}).exec()
        }
    ])
    .views([
        {
            name: 'allUsers',
            args: [],
            call: async (opts, db) => await db.query('select').exec()
        }
    ]);

async function test(history) {
    await nSQL()
        .config({
            history
        })
        .connect();
    const users = nSQL('users');
    await users.doAction('createUser', {name: 'bill'});
    await users.doAction('createUser', {name: 'bob'});
    await users.doAction('createUser', {name: 'jeb'});
    console.log(await users.getView('allUsers'));
}

// when ENABLE_HISTORY === true -> nSQL: Database not connected, can't do a query!
// when ENABLE_HISTORY === false -> everything works as expected
const ENABLE_HISTORY = true; 

test(ENABLE_HISTORY).catch(err => console.error(err));

Blob workers

I'm writing a project that I compile with webpack into a library, which I then include in a different project.

I'm facing the following problem when I run my project as part of the other project.
Chrome throws this error:

index.js:formatted:1 Refused to create a worker from 'blob:http://localhost:8065/3e7b2f07-2997-4bc0-b8c7-1be29c92d938' because it violates the following Content Security Policy directive: "script-src 'self' cdn.segment.com/analytics.js/ 'unsafe-eval'". Note that 'worker-src' was not explicitly set, so 'script-src' is used as a fallback.

The line of code that throws it is this (minified):
var t = new Worker(window.URL.createObjectURL(new Blob(["var t = 't';"])));
inside _detectStorageMethod.

I'm not sure why I don't face this problem when I'm testing my own project separately.

The solution is to send proper headers of course, but I'm wondering what's going on.
Thoughts?
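
For reference, a sketch of the header change (assuming an Express-style server; the key point is that worker-src must allow blob: whenever the script-src fallback does not):

const express = require('express');
const app = express();

// Allow blob: workers explicitly so the script-src fallback no longer blocks them.
app.use((req, res, next) => {
  res.setHeader(
    'Content-Security-Policy',
    "script-src 'self' cdn.segment.com/analytics.js/ 'unsafe-eval'; worker-src 'self' blob:"
  );
  next();
});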

Best wishes,
Anton

Comparisons

How does this library compare with AlaSQL?

How about a comparison with Lovefield?

Write performance with secondary indices under Node with LevelDB store

I'm seeing writes on Node with the LevelDB store take 10x-30x longer to complete when a secondary index is set in the code below (actual performance varies depending on which column the index is set on; also, it doesn't seem to work when set on the "done" column). Given that write speeds otherwise appear to increase linearly, this is surprising. I would expect roughly a 2x increase in time for adding an index.

Thoughts?

const { nSQL } = require('nano-sql')

let timer = null
function resetTimer () {
  timer = new Date()
}
function displayDurationFor (taskName) {
  let duration = (new Date()) - timer
  console.log(`${taskName} took ${duration}ms.`)
}

async function start () {
  const connectionResult = await nSQL('hellodb')
    .model([
      {key: 'id', type: 'int', props: ['pk', 'ai']},
      {key: 'date', type: 'string'/*, props: ['idx']*/}, // uncomment the secondary index to compare timings
      {key: 'description', type: 'string'},
      {key: 'done', type: 'int'}
    ])
    .config({
      mode: 'PERM'
    })
    .connect()

  const numInserts = 10000
  const inserts = [] 
  for (let i = 0; i < numInserts; i++) {
    inserts.push(nSQL('hellodb').query('upsert', { date: new Date(), description: `Item: ${i}`, done: Math.round(Math.random()) }).exec())
  }

  // Time inserts
  resetTimer()
  const addResult = await Promise.all(inserts)
  displayDurationFor(`${numInserts} inserts`)
}

start()

Question:

nSQL("users")
  .model(userModel)
  .rowFilter(function(row) {
    return row;
  })
  .connect()
  .then(
    nSQL("users")
      .query("upsert", { nickname: "billy" })
      .exec()
  );

It shows:

TypeError: nSQL(...).model(...).rowFilter is not a function
    at Object.<anonymous> (/Users/wangzhang/git/stcpd-node-app/src/nano-sql3.js:187:4)
    at Module._compile (module.js:643:30)
    at Object.Module._extensions..js (module.js:654:10)
    at Module.load (module.js:556:32)
    at tryModuleLoad (module.js:499:12)
    at Function.Module._load (module.js:491:3)
    at Function.Module.runMain (module.js:684:10)
    at startup (bootstrap_node.js:187:16)
    at bootstrap_node.js:608:3

Question/Hotfix - where query "OR"

I was curious as to why you made this alteration (the code snippet below). I assume, since it's commented out, that you have the intention of revisiting this code. However, in the meantime I created a hotfix to restore the previous behaviour of multiple where "OR" queries, as well as a test to verify it. I'm not sure how you want to handle this issue, so I didn't open a PR. I've linked the hotfix branch so you can check out the code change, but it's very minor.

Hotfix branch: hotfix-where

https://github.com/ClickSimply/Nano-SQL/blob/d553e979f7164e5e9b152a806bdd78ca449e8f33/lib/database/query.js#L1425-L1459

Performance question - loading large(ish) dataset

(Not sure if there is a mailing list or if you take questions here on GitHub as issues. Lemme know if I should move this somewhere else! --billy)

I have a dataset with about 300K records, and I'd like to use Nano-SQL to do some SQL magic on it, inside a browser app. Inserting the records is taking forever; I let it run for 15 minutes and it still wasn't complete. Surely I'm doing something wrong.

What's the best way to quickly import a few hundred thousand rows of data, and how long should I expect that to take?

  • The data is an array of objects, each with about ten properties/columns. Just simple text and float fields, nothing fancy
  • I tried creating the model with each column specified, and added a uuid auto-generated PK, as well as a few IDX columns.
  • That took forever, so I tried removing all the IDX'es as a test
  • That took forever, so I also tried {key: '*', type:'*'}, with no better luck
  • I tried using query('upsert') on each row, as well as .loadJS() on the entire array
  • Testing on latest Chrome and Firefox, using a very recent ultrabook with Win10.

Is my dataset simply too large for nano-sql, or have I just missed something really dumb?
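
A chunked-import sketch worth trying; this is an assumption about what helps rather than a confirmed fix, and it assumes .loadJS accepts an array for the selected table as used above, with history turned off in config before connecting:

import { nSQL } from "nano-sql";

// Hypothetical helper: insert `rows` into `table` in chunks, waiting for each
// chunk to finish before starting the next one.
async function importInChunks(table, rows, size) {
  size = size || 1000;
  for (let i = 0; i < rows.length; i += size) {
    await nSQL(table).loadJS(rows.slice(i, i + size));
  }
}

// Usage (table name is hypothetical): importInChunks('records', bigArray).then(() => console.log('done'));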

type float doesn't allow 0 or 0.0 on loadJS

I have a model set up with one of the fields like:

{key:'blockStartTime', type: 'float', default: 0},

When I load JSON data from a file using nSQL().loadJS the field above is eliminated from the data if the value is set to either 0 or even 0.0. If I change it to something like 0.001 it works. I also tried changing the default value to 0.0.

Offline use with Sync to Server

I just found Nano-SQL, congratulations on your work, it looks interesting. Oh and good Docs.

I am especially interested in a database which works both in the browser and on the server, with synchronization between them. The browser app needs to be able to work offline and sync to the server when back online, i.e. eventual consistency.

PouchDB/CouchDB does this, however I don't want to use these.

This is for my Web Knowledge Base app Clibu which currently uses MongoDB on the server and Dexie.js, in a limited way in the Browser. It doesn't have the sync/offline capability I am referring to.

Clibu can also be used On Premise, which requires the user to install MongoDB. An embedded DB would be nice in this scenario.

No table _util found when using nSQLiteAdapter

When running this basic gist
https://gist.github.com/plentylife/205214044fc38758c5d4a2bd99e6e72e

I get this stack trace

Error: No table _util found!
nSQLiteAdapter._chkTable (node_modules/nano-sqlite/index.js:62:19)
nSQLiteAdapter.read (node_modules/nano-sqlite/index.js:134:53)
node_modules/nano-sql/lib/database/storage.js:421:47
node_modules/nano-sql/lib/utilities.js:83:13
node_modules/nano-sql/lib/utilities.js:82:16
Object.exports.fastALL (node_modules/nano-sql/lib/utilities.js:81:46)
_NanoSQLStorage._read (node_modules/nano-sql/lib/database/storage.js:420:29)
_RowSelection._selectRowsByIndex (node_modules/nano-sql/lib/database/query.js:1207:20)

I've tried everything I can think of. Please help.

Random Select with Limit

Is it possible to perform a limited select query that returns N elements randomly chosen from the table?
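
A client-side sketch, assuming there is no built-in random ordering: fetch the matching rows, shuffle in JavaScript, and keep the first N. For very large tables you could select only the primary key column first and re-query the sampled ids, but the idea is the same:

function selectRandom(table, n) {
  return nSQL(table).query('select').exec().then(rows => {
    // Shuffle a copy of the result set and keep the first n rows.
    const shuffled = rows.slice().sort(() => Math.random() - 0.5);
    return shuffled.slice(0, n);
  });
}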

query with limit(5) returns more than 5 elements

Hello,

I am testing how nanoSQL behaves and I found something that is not what I was expecting, so I wanted to ask whether I am wrong or there is a bug. I have made a codepen:

https://codepen.io/anon/pen/bKEqbM?editors=0010#anon-login

My idea was to check whether an observable was invoked when an inserted element affects the query result (in this case the "second page"):

nSQL().observable(() =>
  nSQL("users").query('select').limit(5).offset(5).emit())
  .subscribe(update);

And then I run some simple code that inserts elements in the database.

So I ran into these two issues:
1. The observable is called every time the database is modified. I can live with it, but then I see little difference between nSQL().on("change") and .observable.
2. The number of rows returned (8) is bigger than the limit set (5).

Thank you!

Question: transaction

Hi, recently discovered nano-sql and really loving it so far.

I am wondering how to use transactions at the application level (I cannot find it in the documentation)?

Question: how to select all fields when query

I could not find this in the NanoSQL documentation, so I tried "*".
My view is:

[
  {
    name: "get_user_by_id",
    args: ["id:int"],
    call: function(args, db) {
      return db
        .query("select", ["id", "UPPER(username) AS Customer", "*"])
        .where(["id", "=", args.id])
        .exec();
    }
  }
]

It shows:

[ { id: 1,Customer: 'TOM', '*': undefined } ]

eventData.changedRows returning last obj in history on undo rather than history point

Question: Shouldn't eventData.changedRows[0] return the new state of any changed rows via the changed history point when undo/redo is triggered?

I have an on("change") function set up that listens for changes to a table. I have an undoing and a redoing variable, the values of which are tied to undo and redo buttons.

Note the 2 console.log's in my on("change") function. You can see that eventData.changedRows[0] always returns the latest point in history rather than the current history point after undo/redo.

I'm getting around that right now by running a query to retrieve the changed row which does return the current history point so I'm able to get it to work. Just wondered if that was your intention for this.

Below is a screenshot of the console.log's highlighted.

(NOTE: Just as an FYI, .settings.activeSelectors.options[0] is the field I'm changing in order to test, to keep my logs a little simpler. In the first history point I assigned "testing one" as the value and in the next I assigned "testing one two".)

nSQL("aniblocks").on('change',function(eventData) {

  getHistory();

  switch(eventData.changeType) {
    case 'modified':
      if(eventData.changedRows.length > 0) {

        // eventData.changedRows[0] contains data from the latest history point
        console.log('row: ',eventData.changedRows[0].settings.activeSelectors.options[0]);

        if( undoing || redoing ) {
          db.query('select').where(['id','=', eventData.changedRows[0].id]).exec()
          .then(function(result,db){

            // result contains data from the current history point after an undo/redo
            console.log(result[0].settings.activeSelectors.options[0]);
           
            // doing stuff with it here
            animationsMgr.animateFromDB(result[0]);
            resetDoingVars();
          })
        }
      } else {
        // console.log('nothing changed');
      }
    break;
    case 'deleted':
      console.log('deleted');
    break;
    case 'inserted':
      console.log('inserted');
    break;
  }
});

(Screenshot: console output, 2017-05-10.)

Understanding upsert update

I'm able to successfully create records but not able to update. Inspecting the query args I can see that the where data is correct and can also confirm that the record I'm trying to update matches the query.

I have a feeling it's the way I have this set up. Here's what I have. I've removed some of the obj properties to keep it pithy:

SomeSQL('aniblocks')
.model([
  {key:'id',type:'uuid',props:['pk']},
  {key:'blockID', type: 'string'},
  {key:'title',type:'string'},
  {key:'timelineData',type:'map'}
])
.actions([
  {
    name: 'add_new_block',
    args: ['aniblock:map'],
    call: function(args,db) {
      return db.query('upsert',args.aniblock).exec();
    }
  },
  {
    name: 'update_block',
    args: ['aniblock:map'],
    call: function(args,db) {
      var D = args.aniblock;
      return db.query('upsert',{
        title:D.title,
        timelineData:D.timelineData
      }).where(['blockID','=',D.blockID]).exec();
    }
  }
])
.views([
  {
    name: 'get_block_by_blockID',
    args: ['blockID:string'],
    call: function(args,db) {
      return db.query('select').where(['blockID','=', args.blockID]).exec();
    }
  },
  {
    name: 'list_all_blocks',
    args: ['blockID:string'],
    call: function(args,db) {
      return db.query('select').exec();
    }
  }
])
.on('error', function(eventData){
  console.log(eventData);
});

function addNewBlock(obj) {
  SomeSQL()
  .connect()
  .then(function(result, db) {
      db.doAction('add_new_block',{aniblock:{
        id: null,
        blockID: obj.blockID,
        title: obj.title,
        timelineData: obj.timelineData
      }}).then(function(result, db) {
          console.log(result)
          return db.getView('get_block_by_blockID',obj);
      }).then(function(result, db) {
          console.log(result)
      }).on('error', function(eventData){
         console.log(eventData);
      });
  });
}

function updateBlock(obj) {
  SomeSQL()
  .connect()
  .then(function(result, db) {
      db.doAction('update_block',{aniblock:{
        blockID: obj.blockID,
        title: obj.title,
        timelineData: obj.timelineData
      }}).then(function(result, db) {
          return db.getView('get_block_by_blockID',obj);
      }).then(function(result, db) {
          logAll();
      }).on('error', function(eventData){
         console.log(eventData);
      });
  })
}

function logAll() {
  SomeSQL().connect()
  .then(function(result,db) {
    db.getView('list_all_blocks');
  })
  .then(function(result, db){
    console.log(result);
  });
}

I'm adding screenshots of a session so you can see what's happening. Successfully ran addNewBlock - note the blockID value:

(Screenshot: 2017-03-05, 12:34 pm.)

You can see that the result was success.
Then ran updateBlock. You can see here that the blockID is a match:

(Screenshot: 2017-03-05, 12:35 pm.)

From the same update instance here's showing the args object:

(Screenshot: 2017-03-05, 12:36 pm.)

Same instance showing the matching value for 'where':

(Screenshot: 2017-03-05, 12:36 pm.)

Back in the updateBlock method showing the result that nothing was modified:

(Screenshot: 2017-03-05, 12:37 pm.)

So, I'm sure I have something out of place. Hoping you can tell me what I'm doing wrong.

undo redo on change listener

Should the change listener receive the id of, or an object containing, the record that was reset as a result of an undo or redo? Seems like that would be really useful info to have. Currently I'm having to do this:

An undo button listener:

$('#undo-btn').on("click", function(){
  SomeSQL().extend("<").then(function(response) {
    if( response ) {
      resetOnStateChange();
    }
    console.log(response)
  });
  SomeSQL().extend("?").then(function(response) {
    console.log(response)
  });
})

The resetOnStateChange function snags all the records because I don't know what record to get:

function resetOnStateChange() {
  SomeSQL('aniblocks').getView('list_all_blocks')
  .then(function(result, db){
    console.log(result);
    // do stuff with the data
  });
}

I have a change listener like:

SomeSQL("aniblocks").on('change',function(eventData) {
  console.log(eventData);
})

but it returns a set of empty arrays.
(Screenshot: console output, 2017-03-06.)

Issue auto generate id int (websql)

Hi

I am having a problem: after entering 10 records, the next record is not inserted; instead, record 10 is updated with the new data.

I think the problem is the use of the sort() function in the connect method: it sorts the primary keys alphabetically, whereas the system needs them sorted numerically.

In the ECMAScript specification (the normative reference for generic JavaScript), ECMA-262, 3rd ed., section 15.4.4.11, the default sort order is lexicographical, although they don't come out and say it, instead giving the steps for a conceptual sort function that calls the given compare function if necessary, otherwise comparing the arguments when converted to strings:

_WebSQLStore.prototype.connect = function (complete) {
        var _this = this;
        this._db = window.openDatabase(this._id, "1.0", this._id, this._size || utilities_1.isAndroid ? 5000000 : 1);
        utilities_1.fastALL(Object.keys(this._pkKey), function (table, i, nextKey) {
            _this._sql(true, "CREATE TABLE IF NOT EXISTS " + table + " (id BLOB PRIMARY KEY UNIQUE, data TEXT)", [], function () {
                _this._sql(false, "SELECT id FROM " + table, [], function (result) {
                    var idx = [];
                    for (var i_1 = 0; i_1 < result.rows.length; i_1++) {
                        idx.push(result.rows.item(i_1).id);
                    }
                    // SQLite doesn't sort primary keys, but the system depends on sorted primary keys
                    idx = idx.sort();
                    _this._dbIndex[table].set(idx);
                    nextKey();
                });
            });
        }).then(complete);
    };

The solution is to pass a comparator function to sort():

idx = idx.sort(function(a,b){return a - b});

store existing JSON block?

Thank you for this awesome node module! I am most interested in using this as a tool for undo/redo. I'm implementing it in an Electron app that uses Greensock's animation platform - GSAP. In an effort to be able to undo/redo as well as have all the data necessary to save files for later use, I've had to utilize Circular JSON es6 to save the data from the timeline to JSON. I noticed in your code that the available types for a collection automatically attempt to convert to JSON...

"array": JSON.parse(JSON.stringify(val || [])),
"map": JSON.parse(JSON.stringify(val || {})),

I can't store the timelines using this because it would create a Circular issue. So, I'm using circular-json-es6 to convert to JSON safely and then want to store that. Are there any plans to add a blob type?

Thanks again!

undo/redo returning false

Hi, Scott. I'm using the latest version. In an earlier version I had set up 2 buttons like this:

$('#undo-btn').on("click", function(){
  nSQL().extend("<").then(function(response) {
    console.log(response) //<= If this is true, an undo action was done.  If false, nothing was done.
    if(response) {undoing = true};
  });
})

$('#redo-btn').on("click", function(){
  nSQL().extend(">").then(function(response) {
    console.log(response) //<= If this is true, a redo action was done.  If false, nothing was done.
    if(response) {redoing = true};
  });
})

These were working great prior to updating to the latest. I can confirm that changes have been made to rows in the (only) table in the database. This is logging the correct values on update...

nSQL("aniblocks").on('change',function(eventData) {
  console.log('modified: ',eventData.changedRows);
});

I set a breakpoint on nSQL().extend("<").then(function(response) { and took some screenshots for you to peruse.
(Screenshots: breakpoint and console state during the undo call, 2017-05-02.)

Set Database Path ?

How would you set the database path with this library (using LVL)? Currently it's using the project directory. I would want to bundle this.

Inserts fail after 16 items in attached code sample

To reproduce

Run the following code more than 16 times, e.g. if the file is called index.js, run node index.js repeatedly:

const { nSQL } = require('nano-sql')

async function start () {
  const connectionResult = await nSQL('hellodb')
    .model([
      {key: 'id', type: 'int', props: ['pk', 'ai']},
      {key: 'date', type: 'string'},
      {key: 'description', type: 'string'},
      {key: 'done', type: 'int'}
    ])
    .config({
      mode: 'PERM'
    })
    .connect()

  const addResult = await Promise.all([
    nSQL('hellodb').query('upsert', { date: new Date(), description: 'First item', done: 0 }).exec(),
    nSQL('hellodb').query('upsert', { date: new Date(), description: 'Second item', done: 1 }).exec(),
    nSQL('hellodb').query('upsert', { date: new Date(), description: 'Third item', done: 0 }).exec()
  ])

  console.log(addResult)

  const selectResult = await nSQL('hellodb').query('select').where(['done', '>', 0]).exec()

  console.log(selectResult)
  console.log(selectResult.length)
}

start()

What should happen

The length of the results from the select query should increase steadily.

What actually happens

New inserts begin to fail after 16 copies of the second item have been inserted. (Inserts continue to work for the first and third items.)

Also, the generated IDs are rather odd. I'm not sure if this is due to the lexicographical sorting that LevelDB uses:

[{ date: 'Mon May 21 2018 15:42:06 GMT+0100 (IST)',
    description: 'Second item',
    done: 1,
    id: 2 },
{ date: 'Mon May 21 2018 15:42:25 GMT+0100 (IST)',
    description: 'Second item',
    done: 1,
    id: 32 },
{ date: 'Mon May 21 2018 15:42:30 GMT+0100 (IST)',
    description: 'Second item',
    done: 1,
    id: 332 },
 
{ date: 'Mon May 21 2018 15:42:44 GMT+0100 (IST)',
    description: 'Second item',
    done: 1,
    id: 3333333333333332 }]

Note: I was initially doing this in a “transaction” block and seeing the same results but refactored to remove the “transaction” based on #35 (comment) to rule that out as a factor.
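
One workaround sketch while this is open, an assumption rather than a fix for the underlying auto-increment behaviour: supply the primary key explicitly so the adapter never has to generate it.

// Hypothetical helper: seed a counter from the current max id at startup, then
// pass explicit ids instead of relying on props: ['ai'].
let nextId = 1;

async function initCounter() {
  const rows = await nSQL('hellodb').query('select', ['id']).exec();
  nextId = rows.reduce((max, r) => Math.max(max, r.id), 0) + 1;
}

function addItem(description, done) {
  return nSQL('hellodb')
    .query('upsert', { id: nextId++, date: String(new Date()), description: description, done: done })
    .exec();
}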

Question: how to search some words at specified field

Example:
My view code is:

{
    name: "search_specific_fields",
    args: ["q:string", "fields:string"],
    call: (args, db) => {
      let valArr = [];
      args.fields.split(",").forEach((item, index) => {
        valArr[index] = [item, "LIKE", args.q];
      });
      let result = [];
      for (let i = 0; i < valArr.length; i++) {
        result.push(valArr[i]);
        result.push("AND");
      }
      result.pop();
      return db
        .query("select", ["id", "UPPER(username)"])
        .where(result)
        .exec();
    }
  }

And the calling code is:

nSQL("users").getView("search_specific_fields", {
    q: "richard",
    fields: "username,nickname"
  })

modeling a new table in an existing db got issue

First I do this:

nSQL("table1")
  .model([
    { something }
  ])
  .config({
    id: "myDB1",
    mode: "PERM",
    history: false
  })
  .connect();

Then I do:

nSQL("table2")
  .model([
    { something }
  ])
  .config({
    id: "myDB1",
    mode: "PERM",
    history: false
  })
  .connect();

An error will occur saying something like "could not find store object". Then I have to change the database id to a new name. The other workaround is to delete the IndexedDB database in Chrome dev tools and build the tables again.
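
For comparison, a sketch of the ordering that usually avoids this kind of error (an assumption about how connect() registers tables, not a confirmed fix): declare every table's model first, then connect a single time with the shared config.

// Declare every table's model first...
nSQL("table1").model([
  { something }
]);

nSQL("table2").model([
  { something }
]);

// ...then apply the shared database config and connect once.
nSQL().config({
  id: "myDB1",
  mode: "PERM",
  history: false
}).connect();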

Indexes consistently fail to be created correctly

NanoSQL is looking great, congrats. I've plugged it into my app to replace Dexie because it was significantly quicker in testing.

However, I'm upserting some records into IndexedDB through NanoSQL and I'm finding that some rows are not being indexed as expected.

In my test, I have a collection of 4 records that I'm upserting and every time I perform the operation, the 3rd record, no matter what it is, is not indexed.

Here's the model:

[
  {
    "key": "id",
    "type": "uuid",
    "props": [
      "pk"
    ]
  },
  {
    "key": "client_id",
    "type": "uuid",
    "props": [
      "idx"
    ]
  },
  {
    "key": "client_location_id",
    "type": "uuid",
    "props": [
      "idx"
    ]
  },
  {
    "key": "personnel_id",
    "type": "string",
    "props": [
      "idx"
    ]
  },
  {
    "key": "created_date",
    "type": "string",
    "props": []
  },
  {
    "key": "modified_date",
    "type": "string",
    "props": []
  }
]

This is the data:

const data = [
  {
    'id': '1',
    'personnel_id': '2596'
  },
  {
    'id': '2',
    'personnel_id': '2596'
  },
  {
    'id': '3',
    'personnel_id': '2596'
  },
  {
    'id': '4',
    'personnel_id': '2596'
  }                                        
];

And this is the code containing the upsert operation:

data.forEach(row => {
  nSQL('personnelLocation').query('upsert', row).exec();             
});

Here's the resulting index in _personnelLocation_idx_personnel_id:

rows: Array(3)
  0: "1"
  1: "2"
  2: "4"

Am I losing my marbles or is there some bizarre issue at play here?
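
One thing worth ruling out first (an assumption, not a confirmed diagnosis): the forEach above fires all four upserts without waiting for each other, so a sequential version is worth comparing against.

// Sequential version of the same import: each upsert finishes before the next starts.
data.reduce(
  (chain, row) => chain.then(() => nSQL('personnelLocation').query('upsert', row).exec()),
  Promise.resolve()
).then(() => console.log('all rows upserted'));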

How is the immutability implemented?

Just a quick question, I'm looking for a database that provides immutability where every modification or batch of modifications returns a new database, while the old database can still be alive until it is garbage collected.

There are many ways to implement this, but I'm looking for one that does it efficiently (preferably through a path-copying strategy or some combination of fat nodes, etc.) and performs a sort of structural sharing (so that the new copy of the DB is not a full copy, but a constant-factor increase over the time already required for a database insertion anyway).

The other major thing is that I need to store literal & class instantiated objects into the database, and then be able the query the database efficiently. So that means the database must be able to index objects.

Previously I was looking into datascript, which achieves immutability efficiently, but it doesn't support storing literal objects properly (because it copies literal objects, but not class-instantiated objects or objects with no constructor property), and it doesn't index objects either. That required me to hack an object-tagging system into it and make sure all objects being inserted were wrapped in a class-instantiated reference object, adding unnecessary complexity to the overall solution.


I've just had a quick read of the history feature, but it doesn't really match exactly what I need. Explicit clearing of history doesn't fit my usecase, I need to be able to have a reference to a point-in-time snapshot of the database (similar to how datascript provides a single value), and then when that reference is dropped, the JS GC system would remove all unreachable objects. This feature would be useful, as then this database can be combined with other data structures to create more composite immutable data structures.

Performance issue

I see you've renamed and have been fleshing out the new docs. You've been busy!

I did update to whatever the version was a day or 2 ago. package.json has: ^0.4.0

My question has to do with performance. I saw something regarding turning on/off history to speed things up whenever you don't need a history point. I also remember seeing something about another function that could help with performance.

I am importing data on file open and history isn't needed for that. Also, I'd love to find a way to control when history saves trigger, so that the app doesn't freeze while it's updating.

One other question, would performance improve if I persisted data to InnoDB?

Thanks

Question: Migration after database model update

Hi Scott,

I'm bothering you again :)

This is an experience of every project out there: you make a database model, then you change it.
In the case of nano-sql and IndexedDB, it creates a new database.

The question is, how do we get the old data into the new database?

I found this doc, https://docs.nanosql.io/adapters/transfer, but it doesn't exactly answer my question.

Best wishes,
Anton

PS. For me this is a real problem, but can be simplified to this: how do I extract one table from the old database and put it in the new database, if the table model is exactly the same?

connection exception

Hi there,

function nanoTest() {
  nSQL('users') // "users" is our table name.
    .model([ // Declare data model
      {key: 'id', type: 'int'},
      {key: 'name', type: 'string'},
      {key: 'age', type: 'int'}
    ])
    .connect() // Init the data store for usage. (only need to do this once)
    .then(function(result) {
      return nSQL().query('upsert', [
        {id: 1, name: "bill", age: 20},
        {id: 2, name: "tom", age: 30},
        {id: 3, name: "john", age: 33}
      ]).exec();
    });

  nSQL('users2') // "users2" is our second table name.
    .model([ // Declare data model
      {key: 'id', type: 'int'},
      {key: 'tel', type: 'string'},
      {key: 'cell', type: 'string'}
    ])
    .connect() // Init the data store for usage. (only need to do this once)
    .then(function(result2) {
      return nSQL().query('upsert', [
        {id: 1, tel: "02-111-1111", cell: '010-1111-1111'},
        {id: 2, tel: "02-345-3456", cell: '010-1111-1112'},
        {id: 3, tel: "02-123-3456", cell: '010-1111-1112'}
      ]).exec();
    });

  // Join the data
  nSQL("users")
    .query("select", ["users2.id", "users.name", "users.age", "users2.tel", "users2.cell"])
    .where(["users.id", ">", 0])
    .join({
      type: "inner", // Supported join types are left, inner, right, cross and outer.
      table: "users2",
      where: ["users.id", "=", "users2.id"] // any valid WHERE statement works here
    })
    .exec()
    .then(function(rows) {
      console.log(rows);
    });
}

But I see an exception:
uncaught exception: nSQL: Database not connected, can't do a query!

I tested on Firefox.
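
A sketch of the ordering I would expect to be necessary (an assumption: the join has to wait until both connect() chains above have resolved; usersReady and users2Ready are hypothetical names for those two promise chains):

Promise.all([usersReady, users2Ready]).then(function () {
  return nSQL("users")
    .query("select", ["users2.id", "users.name", "users.age", "users2.tel", "users2.cell"])
    .where(["users.id", ">", 0])
    .join({
      type: "inner",
      table: "users2",
      where: ["users.id", "=", "users2.id"]
    })
    .exec();
}).then(function (rows) {
  console.log(rows);
});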

outdated cdn link

In README.md, the CDN link is outdated and won't work with the example code.

<script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/nano-sql.min.js"></script>

Really weird behaviour, perhaps caching?

I'm using nanosql in my project to create a library that I then include into a different project.

When I do the tests on my library everything works fine. But when I include it into the other project there's this really weird issue. The setup is as follows:

  • I use react-nanosql to react to onChange
  • I have a table TABLE with COLUMN with a single row
  • Within onChange I use a select query to get the new value
  • The COLUMN value in that row starts off with 0

When I update the row, I can see a console.log showing that COLUMN has been changed to 1.
The event passed into onChange shows the same, with query.state being complete.
But the select within the onChange shows the old value! It still shows 0!

This is where it gets really weird.
If the mode is set to TEMP everything works.
If the mode is set to PERM, but I'm testing just my library everything works.

The only time this doesn't work, is when I use the library within the other project, and the mode is PERM.

This is very hairy.

PS. The other project uses redux-persist with localforage

Does NanoSQL work with React Native

Hello,

(not sure if this is the right place to ask)

I'm looking at using NanoSQL, and I see that it works really well with React.

My question is, can it be used with React Native?

Best,
Anton

new error related to leveldown

Ran build in my Electron project and now getting this error:

Uncaught Error: The module '/Users/username-was-here/Documents/project-name-was-here/build/node_modules/leveldown/build/Release/leveldown.node'
was compiled against a different Node.js version using
NODE_MODULE_VERSION 48. This version of Node.js requires
NODE_MODULE_VERSION 53. Please try re-compiling or re-installing
the module (for instance, using npm rebuild or npm install).

This is the first time I'm seeing this.

Using:
node v6.9.5
npm version 3.10.10

I went back to 0.4.5 and the error went away. Would love to move up to a later version.

Default might not be working

I'm not sure if I'm doing everything correctly.

I have a table with this key in the model,
{key: 'communitySharePoints', type: 'int', default: 0}

When I get a record from it using,
return nSQL(AGENT_WALLET_TABLE)
  .query('select', ['agentId', 'communitySharePoints'])
  .where(['communityId', '=', communityId])
  .exec()

The agentId is set, but communitySharePoints is undefined.

Perhaps I'm doing something wrong?
When I upsert the record, I don't have communitySharePoints in the object at all. Maybe that's the issue?

Bug: < on indexed column

This is a kind of funny bug.

I have a column like this
{key: 'lastNotification', type: 'number', props: ['idx']}
and a query like this
nSQL(TABLE).query('select').where(['lastNotification', '<', before]).exec()

The above does not work.
If I remove 'idx' from the column, it works
OR if I switch '<' to '<=' it works

Question regarding passing additional args

I have a need to pass additional data (not in an nSQL table) to be used, for example, after a result is pulled using a query. I have the following example...

var extraData = {var1: "one", var2: "two", var3: "three"};

nSQL('aniblocks')
  .query("select")
  .where(["id","=",block.settings.pk])
  .exec()
  .then(function(result,db){
    // use data from extraData here along with the result
  })

Is there a convenient way to pass additional data? Currently I'm handling it by creating a function and calling it like this:

var extraData = {var1: "one", var2: "two", var3: "three"};
function getExtraData() {
  return extraData;
}
nSQL('aniblocks')
  .query("select")
  .where(["id","=",block.settings.pk])
  .exec()
  .then(function(result,db){
    var data = getExtraData();
    // now work with the data
  })
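
For what it's worth, a simpler sketch: the .then callback is a closure, so it can read extraData directly without a getter, as long as extraData is in scope where the query is built:

var extraData = {var1: "one", var2: "two", var3: "three"};

nSQL('aniblocks')
  .query("select")
  .where(["id", "=", block.settings.pk])
  .exec()
  .then(function(result, db) {
    // extraData is captured by the closure; no helper function is needed.
    console.log(extraData.var1, result.length);
  });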

Question: using join like this:

nSQL("users")
  .query("select", ["users.id", "UPPER(users.name) AS users.name", "orders.date", "orders.total"])
  .join({
    type: "left",
    table: "orders",
    where: ["users.id", "=", "orders.userID"]
  })
  .where(["orders.total", ">", 200])
  .orderBy({"orders.date": "asc"})
  .exec().then..

It shows:

[ { 'users.id': 1,
    'users.name': 'UNDEFINED',...
