
IngestDB

A peer-to-peer database for dat:// applications. (See "How it works" below.)

Example

Setup in the browser:

const Ingest = require('ingestdb')
var db = new Ingest('social-profiles')

Setup in node:

const DatArchive = require('node-dat-archive')
const Ingest = require('ingestdb')
var db = new Ingest('social-profiles', {DatArchive})

Define your schema:

db.schema({
  version: 1,
  broadcasts: {
    primaryKey: 'createdAt',
    index: [
      'createdAt',
      '_origin+createdAt' // compound index. '_origin' is an autogenerated attribute which represents the URL of the authoring archive
    ],
    validator: record => {
      assert(typeof record.text === 'string')
      assert(typeof record.createdAt === 'number')
      return record
    }
  },
  likes: {
    primaryKey: 'createdAt',
    index: 'targetUrl',
    validator: record => {
      assert(typeof record.targetUrl === 'string')
      return record
    }
  },
  profiles: {
    singular: true,
    index: 'name',
    validator: record => {
      assert(typeof record.name === 'string')
      return {
        name: record.name,
        description: isString(record.description) ? record.description : '',
        avatarUrl: isString(record.avatarUrl) ? record.avatarUrl : ''
      }
    }
  }
})

Then open the DB:

await db.open()

Next we add source archives to be ingested (added to the dataset). The source archives are persisted in IndexedDB, so this doesn't have to be done on every run.

await db.addArchives([alicesUrl, bobsUrl, carlasDatArchive])
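The source set can be inspected or pruned later. A minimal sketch using listArchives() and removeArchive() from the API quick reference below (the exact return shape of listArchives() is an assumption):

// list the source archives currently being ingested
var archiveUrls = await db.listArchives()

// stop ingesting carla's archive
// (assumption: this also removes her records from the index)
await db.removeArchive(carlasDatArchive)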

Now we can begin querying the database for records.

// get the first profile record where name === 'bob'
var bobProfile = await db.profiles.get('name', 'bob')

// get all profile records which match this query
var bobProfiles = await db.profiles
  .where('name')
  .equalsIgnoreCase('bob')
  .toArray()

// get the 30 latest broadcasts from all source archives
var recentBroadcasts = await db.broadcasts
  .orderBy('createdAt')
  .reverse() // most recent first
  .limit(30)
  .toArray()

// get the 30 latest broadcasts by a specific archive (bob)
// - this uses a compound index to filter by origin, and then sort by createdAt
var bobsRecentBroadcasts = await db.broadcasts
  .where('_origin+createdAt')
  .between([bobsUrl, ''], [bobsUrl, '\uffff'])
  .reverse() // most recent first
  .limit(30)
  .toArray()

// get the # of likes for a broadcast
var numLikes = await db.likes
  .where('targetUrl').equals(bobsRecentBroadcasts[0]._url) // _url is an autogenerated attribute which represents the URL of the record
  .count()

We can also use Ingest to create, modify, and delete records (and their matching files).

// update bob's name
await db.profiles.update(bobsUrl, {name: 'robert'})

// publish a new broadcast for bob
var broadcastUrl = await db.broadcasts.add(bobsUrl, {
  text: 'Hello!',
  createdAt: Date.now()
})

// modify the broadcast
await db.broadcasts.update(broadcastUrl, {text: 'Hello world!'})

// like the broadcast
await db.likes.add(bobsUrl, {
  targetUrl: broadcastUrl,
  createdAt: Date.now()
})

// delete the broadcast
await db.broadcasts.delete(broadcastUrl)

// delete all likes on the broadcast (that we control)
await db.likes
  .where({targetUrl: broadcastUrl})
  .delete()
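There is also an upsert() helper (see the API quick reference below). A minimal sketch; the exact create-vs-update semantics are an assumption:

// create bob's profile record if it doesn't exist,
// otherwise apply the updates to the existing record
await db.profiles.upsert(bobsUrl, {name: 'robert'})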

TODOs

Ingest is still in development.

  • Indexer
  • Core query engine
  • Persisted tables and table reindex on schema change
  • Mutation methods (add/update/delete)
  • Events
  • Multikey indexes
  • More efficient key queries (currently loads full record from disk - could just load the keys)
  • Validation: filename must match primaryKey on non-singular tables
  • Support for .or() queries
  • Complete documentation

API quick reference

var db = new IngestDB(name)
IngestDB.list() => Promise<Void>
IngestDB.delete(name) => Promise<Void>
db.open() => Promise<Void>
db.close() => Promise<Void>
db.schema(Object) => Promise<Void>
db.addArchive(url|DatArchive, {prepare: Boolean}) => Promise<Void>
db.addArchives(Array<url|DatArchive>, {prepare: Boolean}) => Promise<Void>
db.removeArchive(url|DatArchive) => Promise<Void>
db.prepareArchive(url|DatArchive)
db.listArchives() => Promise<url>
db 'open' ()
db 'open-failed' (error)
db 'versionchange' ()
db 'indexes-updated' (archive, archiveVersion)

db.{table} => IngestTable
IngestTable#add(archive, record) => Promise<url>
IngestTable#count() => Promise<Number>
IngestTable#delete(url) => Promise<url>
IngestTable#each(Function) => Promise<Void>
IngestTable#filter(Function) => IngestQuery
IngestTable#get(url) => Promise<Object>
IngestTable#get(archive) => Promise<Object>
IngestTable#get(archive, key) => Promise<Object>
IngestTable#get(index, value) => Promise<Object>
IngestTable#isRecordFile(String) => Boolean
IngestTable#limit(Number) => IngestQuery
IngestTable#listRecordFiles(Archive) => Promise<Object>
IngestTable#name => String
IngestTable#offset(Number) => IngestQuery
IngestTable#orderBy(index) => IngestQuery
IngestTable#put(url, record) => Promise<url>
IngestTable#query() => IngestQuery
IngestTable#reverse() => IngestQuery
IngestTable#schema => Object
IngestTable#toArray() => Promise<Array>
IngestTable#toCollection() => IngestQuery
IngestTable#update(record) => Promise<Number>
IngestTable#update(url, updates|function) => Promise<Number>
IngestTable#update(archive, updates|function) => Promise<Number>
IngestTable#update(archive, key, updates|function) => Promise<Number>
IngestTable#upsert(url|archive, record|function) => Promise<Void | url>
IngestTable#where(index) => IngestWhereClause
IngestTable 'index-updated' (archive, archiveVersion)

IngestWhereClause#above(lowerBound) => IngestQuery
IngestWhereClause#aboveOrEqual(lowerBound) => IngestQuery
IngestWhereClause#anyOf(Array|...args) => IngestQuery
IngestWhereClause#anyOfIgnoreCase(Array|...args) => IngestQuery
IngestWhereClause#below(upperBound) => IngestQuery
IngestWhereClause#belowOrEqual(upperBound) => IngestQuery
IngestWhereClause#between(lowerBound, upperBound, {includeUpper, includeLower}) => IngestQuery
IngestWhereClause#equals(value) => IngestQuery
IngestWhereClause#equalsIgnoreCase(value) => IngestQuery
IngestWhereClause#noneOf(Array|...args) => IngestQuery
IngestWhereClause#notEqual(value) => IngestQuery
IngestWhereClause#startsWith(value) => IngestQuery
IngestWhereClause#startsWithAnyOf(Array|...args) => IngestQuery
IngestWhereClause#startsWithAnyOfIgnoreCase(Array|...args) => IngestQuery
IngestWhereClause#startsWithIgnoreCase(value) => IngestQuery

IngestQuery#clone() => IngestQuery
IngestQuery#count() => Promise<Number>
IngestQuery#delete() => Promise<Number>
IngestQuery#each(Function) => Promise<Void>
IngestQuery#eachKey(Function) => Promise<Void>
IngestQuery#eachUrl(Function) => Promise<Void>
IngestQuery#filter(Function) => IngestQuery
IngestQuery#first() => Promise<Object>
IngestQuery#keys() => Promise<Array<String>>
IngestQuery#last() => Promise<Object>
IngestQuery#limit(Number) => IngestQuery
IngestQuery#offset(Number) => IngestQuery
IngestQuery#or(index) => IngestWhereClause
IngestQuery#orderBy(index) => IngestQuery
IngestQuery#put(Object) => Promise<Number>
IngestQuery#urls() => Promise<Array<String>>
IngestQuery#reverse() => IngestQuery
IngestQuery#toArray() => Promise<Array<Object>>
IngestQuery#uniqueKeys() => Promise<Array<String>>
IngestQuery#until(Function) => IngestQuery
IngestQuery#update(Object|Function) => Promise<Number>
IngestQuery#where(index) => IngestWhereClause

API

db.schema(definition)

{
  version: Number, // the version # of the schema, should increment by 1 on each change

  [tableName]: {
    // is there only one record-file per archive?
    // - if true, will look for the file at `/${tableName}.json`
    // - if false, will look for files at `/${tableName}/*.json`
    singular: Boolean,

    // attribute to build filenames for newly-created records
    // ie `/${tableName}/${record[primaryKey]}.json`
    // only required if !singular
    primaryKey: String, 

    // specify which fields are indexed for querying (optional)
    // each is a keypath, see https://www.w3.org/TR/IndexedDB/#dfn-key-path
    // can specify compound indexes with a + separator in the keypath
    // eg one index               - index: 'firstName' 
    // eg two indexes             - index: ['firstName', 'lastName']
    // eg add a compound index    - index: ['firstName', 'lastName', 'firstName+lastName']
    // eg index an array's values - index: ['firstName', '*favoriteFruits']
    index: String|Array<String>,

    // validator & sanitizer (optional)
    // takes the ingested file (must be valid json)
    // returns the record to be stored
    // returns falsy or throws to not store the record
    validator: Function(Object) => Object,

    // file-creator (optional)
    // takes the record
    // returns the object to be stored to the file
    // returns falsy or throws to not write the file
    toFile: Function(Object) => Object
  }
}

About validator and toFile

The validator method is called any time Ingest is given a record, either due to reading it from an archive, or because the application called add() or update() with new record data.

The toFile method is only called when the application calls add() or update() with new record data. It is called after validator. Its main purpose is to reduce the data saved to the file.
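For example, a table can validate every record it sees while trimming what it writes back to disk. A minimal sketch using a hypothetical 'posts' table:

posts: {
  primaryKey: 'createdAt',
  validator: record => {
    // runs on every ingest, and on add()/update()
    assert(typeof record.text === 'string')
    assert(typeof record.createdAt === 'number')
    return record
  },
  toFile: record => {
    // runs only on add()/update(), after validator
    // store only the fields the schema cares about
    return {text: record.text, createdAt: record.createdAt}
  }
}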

How it works

IngestDB abstracts over the DatArchive API to provide a simple database-like interface. It is inspired by Dexie.js and built on LevelDB. (In the browser, it runs on IndexedDB using level.js.)

Ingest works by scanning a set of source archives for files that match a path pattern. Those files are indexed ("ingested") so that they can be queried easily. Ingest also provides a simple interface for adding, editing, and removing records on the archives that the local user owns.

Ingest sits on top of Dat archives. It duplicates the data it's handling into IndexedDB, and that duplicated data acts as a throwaway cache -- it can be reconstructed at any time from the Dat archives.

Ingest treats individual files in the Dat archive as individual records in a table. As a result, there's a direct mapping from each table to a folder of .json files. For instance, if you had a 'tweets' table, it would map to the /tweets/*.json files. Ingest's mutators, such as put(), add(), or update(), simply write those JSON files. Ingest's readers and query methods, such as get() or where(), read from the IndexedDB cache.
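That mapping is exposed directly on each table. A sketch using isRecordFile() and listRecordFiles() from the API quick reference (bobsArchive is a hypothetical DatArchive instance, and the return shape of listRecordFiles() is an assumption):

// does this path belong to the 'broadcasts' table?
db.broadcasts.isRecordFile('/broadcasts/1510000000000.json') // => true

// enumerate the record files bob's archive holds for this table
var recordFiles = await db.broadcasts.listRecordFiles(bobsArchive)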

Ingest watches its source archives for changes to the json files. When they change, it reads them and updates IndexedDB, thus the query results stay up-to-date. The flow is, roughly: put() -> archive/tweets/12345.json -> indexer -> indexeddb -> get().
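Concretely, tracing that flow with the 'broadcasts' table from the example above:

// 1. write: creates /broadcasts/<createdAt>.json in bob's archive
var url = await db.broadcasts.add(bobsUrl, {text: 'Hi!', createdAt: Date.now()})

// 2. the indexer notices the new file, runs the validator, and updates IndexedDB

// 3. read: answered from the IndexedDB cache, not from the archive itself
var record = await db.broadcasts.get(url)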
