
field-manual's Introduction

Notice: The field manual is outdated. Code examples may use deprecated APIs and result in errors. Consult the current documentation of the packages being used.

The OrbitDB Field Manual

An end-to-end tutorial, an in-depth look at OrbitDB's architecture, and even some philosophical musings about decentralization and the distributed industry. From the creators of OrbitDB.


What's in the book?

The book opens with an introduction that gives an overview of the promises and risks of the distributed space, and describes OrbitDB and its use cases at a high level.

The tutorial begins by guiding you through building a JavaScript application from scratch. You will work through: installation and database creation; managing and structuring your data; networking, communicating, and sharing data in a peer-to-peer fashion; and finally managing distributed identity and access to the databases. By the end of the tutorial, you should have everything you need to start building your own peer-to-peer applications with OrbitDB.

Next, Thinking Peer to Peer is a collection of essays that approach peer-to-peer engineering from a more intellectual and philosophical perspective. It is light on code and heavy on ideas. It is also open for community members to contribute essays of their own, pending editorial review.

Then, The Architecture of OrbitDB covers, in a more reference-like style, the structure and architecture of OrbitDB in depth. It includes a description of ipfs-log (the core of OrbitDB), the data stores, and finally how the orbit-db library brings it all together into a single, cohesive package that works both in the browser and on the command line.

What comes next? provides some guidance and suggestions about subsequent topics you should explore. This section serves as a launch pad to further your understanding of how our distributed future will be built.

Finally, Customizing OrbitDB explains how you might write custom stores for OrbitDB. It is written as a second tutorial that builds on the first.

How to read this book

While this book is best consumed by reading it cover to cover, we understand that your time is valuable and you want to get the most out of it in the shortest amount of time. Here are some suggestions for digesting the information efficiently.

If you are a technical person and want to use OrbitDB to build distributed, peer-to-peer applications, start with Part 1: The tutorial, move to Part 3: The Architecture of OrbitDB, and then read chapters from Part 2 and Part 4 as necessary to fill in any knowledge gaps you may have.

After reading Part 1, Part 3 and Part 2, you are equipped to read Part 5: Customizing OrbitDB.

If you do not want to write code, but instead want to understand peer-to-peer systems and architectures at a higher level, you should be able to get away with only reading Part 2: Thinking Peer to Peer, and then moving on to Part 4, followed by Part 3.

Please note that we may repeat ourselves in different parts of the book. This is intentional, because we cannot guarantee that people will read the sections of the book in order. Feel free to skim any sections you feel you already understand.

Getting the book

Here you can download a copy of the book in the following free formats:

Gaps and out-of-date sections

OrbitDB is alpha-phase software in an alpha industry. One consequence is that documentation can quickly become out of date and accumulate errors.

It is thus vital that everyone reading the OrbitDB Field Manual also stays in contact with the OrbitDB community on Matrix or Gitter, both to ask questions and to report issues with the manual.

Maintainers

Mark Henderson (@aphelionz) is the maintainer and lead author of the OrbitDB field manual. However, this work is built upon the efforts of many other people:

Contributing

This is a living, community-based document, which means it is for and can include you.

Anybody can:

  1. give feedback on, or request modifications to, the tutorial;
  2. submit an essay for inclusion in the "Thinking Peer to Peer" section.

To do so you are welcome to create a pull request.

Please look at and follow the Code of Conduct.

Building

Requires Pandoc to convert markdown to other formats.

  1. Make your edits in the markdown files.
  2. npm run lint to make sure your edits meet linting standards.
  3. npm run build to populate the dist folder.
  4. Manually audit the dist output to ensure no errors were made.
  5. Create your PR!

License

The OrbitDB Field Manual is released under the Creative Commons Attribution-NonCommercial 4.0 International License by Haja Networks Oy.


field-manual's People

Contributors

aaronli7, abuisman, aphelionz, csdummi, daihaushu, david0178418, dependabot[bot], emhane, ingyamilmolinar, joshpainter, julienmalard, justmaier, oslfmt, peterhuba, phillmac, phonofidelic, ricardojmendez, richardlitt, squidwardthetentacles, tabcat, tjbdev, trancephorm, vasa-develop, weldeasy


field-manual's Issues

Energy efficiency for battery-powered devices, e.g. smartphones

I would like to discuss the energy efficiency of the OrbitDB protocol.

  1. Does one only need a single permanent open connection to the IPFS network for OrbitDB to work?

  2. Should one need multiple connections and have to reconnect all the time, the battery of devices would drain very fast. Moreover, it would use more bandwidth.

This is the only problem that comes to mind with content-based vs. location-based addressing, and it could hold OrbitDB back from replacing popular chat systems and protocols in the long run.

I am not speaking of any particular device; for IoT it would be very useful to be energy efficient as well. Please tell me right away if this issue is out of place.

FAQ: Is every 'put' to OrbitDB immediately sent to the network and persisted?

Is every 'put' to OrbitDB immediately sent to the IPFS network, and when 'await orbitdb.put(key, value)' returns, can I assume that the data has been persisted to at least one other node in addition to my own?

When calling put or any other update operation on a database, the data is 1) saved locally and persisted to IPFS and 2) sent to the network through IPFS pubsub to peers who have the database open.

Upon calling put (or any other update), OrbitDB saves the data locally and returns. That is, the operation and its data are saved to the local node only, after which put returns and a message is asynchronously sent to pubsub peers. OrbitDB doesn't have a notion of confirming replication status from other peers (although this can be added at the user level) and considers an operation a success upon persisting it locally. OrbitDB doesn't use consensus, nor does it wait for the network to confirm operations, making it an eventually consistent system.

In short: it can't be assumed that data has been replicated to the network after an update operation (e.g. put, add) finishes.
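To make the distinction concrete, here is a minimal sketch assuming two peers already have the same keyvalue database open (db on each peer). The writer's put resolves as soon as the entry is persisted locally; the receiving peer only learns about the update via the 'replicated' event, and any stronger acknowledgement would have to be built on top of that at the application level.

// Writer peer: resolves once the entry is persisted *locally* only.
await db.put('greeting', 'hello')
console.log('saved locally - no guarantee any other peer has it yet')

// Receiving peer: the 'replicated' event fires once the update has
// actually arrived from the network and been merged into the local log.
db.events.on('replicated', (address) => {
  console.log('received an update for', address)
  console.log('current value:', db.get('greeting'))
})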

FAQ: Can I recreate the entire database on another machine based on the address?

If I have my OrbitDB address, can I recreate the entire database on another machine?

A database can't be "recreated" without downloading it from other peers. Knowing the address allows a user to open the database, which automatically connects to other peers who have it open and downloads the data, which then "recreates" the database state locally, i.e. replicates the database.

Replicating service?

I can see that orbitdb replicate <database> lets me replicate a single DB. Is there a way to replicate a large list of databases?

If we use the OrbitDB version of Twitter as an example, is there a way to replicate every user's stream, so that if a user goes offline before anyone else "follows" them (i.e. their OrbitDB and IPFS nodes become inaccessible), someone else can still follow them and retrieve their tweets from some kind of replicating service?

  1. Does something like this exist, or would I need to write it?
  2. At what scale of replication am I likely to run into scaling issues?

Thanks!

Does OrbitDB only sync after a write?

In the OrbitDB docs, I read: "In order to have the same data, ie. a query returns the same result for all peers, an OrbitDB database must be replicated between the peers. This happens automatically in OrbitDB in a way that a peer only needs to open an OrbitDB from an address and it'll start replicating the database."

However, the behavior I'm seeing while testing this locally is that when I merely open a DB, no syncing happens.

I have one IPFS server, one Orbit database, and two local instances of that database, each one with some heads that the other does not have. I start up both instances, no messages are sent on IPFS pubsub, and both DBs remain unsynced.

If I do a write while both are online, they both pick up that new head and follow the next pointers back from that write. But merely opening a database connection does not sync.

Is this intended behavior? Could this be because my local DB instances use the same user public key? And/or is it because I'm sharing a remote IPFS server?

Source below:

const IpfsApi = require('ipfs-api')
const OrbitDB = require('orbit-db')

const ipfs = IpfsApi('localhost', '5001')
const orbitdb = new OrbitDB(ipfs)

async function run(){
    const db = await orbitdb.open('/orbitdb/MYDBHASHHERE/test-db')
    await db.load()

    const result = db.iterator({ limit: -1 }).collect()
    console.log("Start")
    console.log(JSON.stringify(result, null, 2))

    db.events.on('replicated', (address) => {
        console.log("REPLICATED")
        console.log(db.iterator({ limit: -1 }).collect())
    })
}

run()

(Question) Drop Invalid Entries?

If I want to create a database that stores signed Ethereum messages, is there a way I can make my web application ignore/refuse to store or propagate messages that it finds to be invalid in updates from other IPFS nodes/web clients? I want to prevent spam attacks while still allowing anyone to post signed Ethereum messages to the database.
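One way to do this, sketched below, is a custom access controller whose canAppend hook rejects entries whose payload does not carry a valid signature. This is only a sketch: verifyEthMessage is a hypothetical helper standing in for an Ethereum signature check, and the boilerplate (extending the library's base access-controller class, registering it, implementing save/load) is omitted because those details have changed between OrbitDB releases; check the orbit-db-access-controllers documentation for the version you use.

// Hypothetical helper: returns true only if the payload carries a valid
// Ethereum signature over its contents. Not part of OrbitDB.
const verifyEthMessage = require('./verify-eth-message')

class SignedMessageAccessController /* extends the library's base access-controller class */ {
  // Name used to refer to this controller when creating databases.
  static get type () { return 'eth-signed-messages' }

  // Called for every entry before it is appended to the local log -
  // both for local writes and for entries replicated from other peers.
  async canAppend (entry, identityProvider) {
    const message = entry.payload.value
    return verifyEthMessage(message)   // returning false drops the entry
  }
}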

Authentication

I am developing a chat app based on OrbitDB and IPFS.
One issue I have encountered is authentication, because I want users to be able to just use their login and password instead of copying public and private keys between machines. One solution I came up with was to include the user's OrbitDB key pair (encrypted using symmetric crypto), together with other info, in the user's database, which is read-only for everyone except its creator. Then, when logging in, my API would attempt to decrypt the key, and if that succeeds I'd just use the key pair to get full access to the database (somehow replacing OrbitDB's key pair with the one from the database). But I don't know whether this would create issues (like multiple clients having the same key pair) or whether it's even possible.
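The key-wrapping half of this idea can be sketched with Node's built-in crypto module. This only illustrates the poster's scheme, assuming the key pair can be serialized to a string (keyPairJson is a placeholder); it says nothing about whether swapping OrbitDB's key pair at runtime is actually supported.

const crypto = require('crypto')

// Derive a symmetric key from the user's password and encrypt the serialized
// key pair. The result (salt, iv, tag, ciphertext) could be stored in a
// public, read-only database entry.
function encryptKeyPair (keyPairJson, password) {
  const salt = crypto.randomBytes(16)
  const iv = crypto.randomBytes(12)
  const key = crypto.scryptSync(password, salt, 32)
  const cipher = crypto.createCipheriv('aes-256-gcm', key, iv)
  const ciphertext = Buffer.concat([cipher.update(keyPairJson, 'utf8'), cipher.final()])
  return { salt, iv, tag: cipher.getAuthTag(), ciphertext }
}

// On login, re-derive the key and decrypt; this throws if the password is wrong.
function decryptKeyPair ({ salt, iv, tag, ciphertext }, password) {
  const key = crypto.scryptSync(password, salt, 32)
  const decipher = crypto.createDecipheriv('aes-256-gcm', key, iv)
  decipher.setAuthTag(tag)
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8')
}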

scaling/limits/performance

what are the practical or theoretical limits of orbit-db (or ipfs)?

how many keys can one put in the kv store?

how many values can a key have?

does having many keys and/or values have an impact on

  • read speed?
  • write speed?
  • memory

does a node need the full db to read/write?

Research hard copy "print on demand" options

This book will be available via free electronic download to ensure mass financial accessibility.

However, The OrbitDB Field Manual is meant to be exactly that: for use in the field. We need to assume that there are austere environments where people cannot access the internet, or have reasons to choose not to.

Since Haja cannot be expected to manage inventory and shipping, researching options for print-on-demand hard-copy purchasing will be crucial. Print on demand is also the most consistent with a decentralized philosophy, since it allows people to choose from several printing vendors.

FAQ: Does OrbitDB have a shared feed between peers where multiple peers can append to the same feed?

Copy-pasting from Gitter:

"...or, is it done more like scuttlebutt, where each peer has their own feed"

All databases (feeds) are shared between peers, so nobody "owns" them the way users do in SSB (afaik). Multiple peers can append to the same db. @tyleryasaka is right in that each peer has their own copy of the db (the log), and the copies may differ between peers; but, as @tyleryasaka (correctly) describes, through the syncing/replication process the peers exchange "their knowledge of the db" (the heads) with each other and the dbs/logs get merged. This is what the "CRDT" in ipfs-log enables. But from an address/authority/ownership perspective, they all share the same feed.

Thanks @pkieltyka for the question! 👍
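A minimal sketch of that point, assuming two peers that can reach each other, where orbitdbA and orbitdbB are their OrbitDB instances and the feed is created with open write access. The exact write-access option name has varied between releases, so this is illustrative only.

// Peer A: create a feed anyone may write to, then share its address.
const feedA = await orbitdbA.feed('shared-feed', { write: ['*'] })
await feedA.add({ author: 'A', text: 'hello from A' })
console.log('share this address:', feedA.address.toString())

// Peer B: open the same address and append to the *same* feed.
const feedB = await orbitdbB.open(feedA.address.toString())
await feedB.load()
await feedB.add({ author: 'B', text: 'hello from B' })

// After replication, both peers see the merged log.
feedA.events.on('replicated', () => {
  console.log(feedA.iterator({ limit: -1 }).collect().map(e => e.payload.value))
})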

Replication conditions

Hi, is there any possibility of making conditions for accepting a new entry?

For example (pseudocode):

if (!isEntryValid(newEntry)) {
  return false        // reject: don't add the entry
} else {
  addNewEntry(newEntry)
}

The main idea is: if a new entry has an incorrect signature, then don't add the entry to your own copy of the public db.
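In recent releases this kind of condition typically lives in a custom access controller's canAppend hook, which OrbitDB consults for every entry before appending it to the local log, including entries replicated from other peers. Below is a rough sketch of wiring such a controller in, assuming the orbit-db-access-controllers package and reusing the controller class sketched under "(Question) Drop Invalid Entries?" above; the exact API (addAccessController, createInstance, the accessController option) has shifted between releases, so verify against the version you use.

const OrbitDB = require('orbit-db')
const AccessControllers = require('orbit-db-access-controllers')
// SignedMessageAccessController: the class from the earlier sketch,
// whose canAppend returns false for entries with invalid signatures.

AccessControllers.addAccessController({ AccessController: SignedMessageAccessController })

async function openValidatedFeed (ipfs) {
  // Hand the registry to OrbitDB, then name the controller type per database.
  const orbitdb = await OrbitDB.createInstance(ipfs, { AccessControllers })
  return orbitdb.feed('public-feed', {
    accessController: { type: 'eth-signed-messages' }
  })
}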

Let's have Orbit go full IPFS

As the title says, wouldn't it be great to be able to access Orbit exclusively via IPFS? (I hope I didn't miss the link, but I looked in a few places and all I've seen were HTTP ones.)

If there are obstacles, could you list them here?

How to replicate databases?

Hey there,
I am new to OrbitDB, and I think it's amazing. But I didn't see how to add peers in OrbitDB. Can we get the nodes synced by default?
Thanks in advance
Cheers ^^

Questions about data validation

Hi there! I have a P2P app that I want to build and I've recently stumbled across OrbitDB. I'm new to all this p2p stuff and I'm confused about how data validation works using a client-side db. I'm familiar with using public-key cryptography to sign and verify transactions, but how would I prevent malicious users from spamming the application if I can't perform server-side validation? Wouldn't I need some sort of smart contract? Thanks.

Rust version

I've been following the progress of https://github.com/libp2p/rust-libp2p, and it looks like it's ready to build a Rust IPFS implementation on top of it. Once that is in place, I'd like to build a version of OrbitDB (or something similar) in Rust. My use case is desktop applications, plugins and mobile apps. Something to compete with Realm or Couchbase Lite.

Any thoughts on how long this might take? (Porting OrbitDB, not IPFS).

Authorization Middleware Possibilities for OrbitDB

My idea is still a work in progress, so bear with me. From what I have seen so far, key signing per peer is how granting access to store data works in OrbitDB. I am hoping to introduce a finer-grained level of access, and to do that I was thinking that creating a middleware would be a good option. The idea is that metadata stores could exist to map doc, key/val, etc. access to different users. With that middleware in place, attempts could be denied or granted based on the metadata store. This is a very rough idea and may not be feasible with the middleware alone; I may also need to use a 3rd-party service (like uPort or Auth0) to handle the storage of this kind of information.

I'd appreciate any thoughts or direction people in the community might have on this topic.
I know that work is being done on the Dynamic Access Controllers, but from my understanding that is access at a store level. I am hoping to make it possible at a lower level.

Replicating a database

Hello World!

I'm trying to replicate my public database on another computer using the address "/orbitdb/QmPmh5G7kUVTCAAeRCaZmdN8zeDbMByf3oU41xA1Cx5cw9/my-db":
db = await orbitdb.docs('/orbitdb/QmPmh5G7kUVTCAAeRCaZmdN8zeDbMByf3oU41xA1Cx5cw9/my-db');

and when I try to get an existing object, I get an empty document.

Is it possible to replace the signalling server with SSDP or something similar for an offline-first web application LAN peer mesh?

I'm researching how to build an offline-first web application (PWA). It has a group of nodes running on a subnet, and only one peer at a time, the primary, talks to the cloud. If the primary goes down, the other peers need to become aware of it, and since they all share the same database via orbit-db, any of them can then become the primary, or stay offline and continue to sync data on the LAN.

It's not clear from the IPFS docs how to do this. Really, I'd just like to know if it's possible.

The orbit-db intro does say it is "making OrbitDB an excellent choice for ... offline-first web applications".
I suppose I could also build a signalling server into every node, and if the current primary node (which was also the acting signalling server) went down and the peers lost contact with it, another could be turned on, somehow?

[Question] Security and storage

Hello there, guys!

I am very curious about the security and storage of your DB. As far as I can see, you can give write permissions. However, I am curious about one particular scenario. Let's say, I have a central server that is the main authority. It is designed to be as unimportant and as lightweight as possible, so most of the heavy lifting is done via the p2p data sharing.

  1. The main idea initially was: give write permission to the main server only, leaving everyone basically read-only, while it is offline. However, it sounds too constraining. Are there any other possibilities or better battle-tested patterns? And, also, how secure will the whole system be?

  2. Storage. Let's imagine I am creating a Twitter-like service. Each user, message, and other piece of data is stored in the distributed DB, which is the most intuitive way of achieving that. However, it makes me doubt how feasible this tactic will be on mobile devices, with very limited data bandwidth and RAM. So, the question is: how is data being stored on the network? Does every user download and own the whole database? Or is it smartly split up into chunks?

  3. Does every user pin and host the database?

Sometimes I start to think that using a hand-written system built on top of a WebRTC DHT such as KAD.js might be more secure overall and simpler to debug. For example, giving the main server a public-private key pair, which would allow it to sign some pieces of data that would then be stored on the client's side. However, that creates questions about data accessibility and sharing.

Thanks!

How is this different from Dat ecosystem?

The Dat project also has secure logs, key-value stores, and databases built on top of single- and multi-author append-only logs, distributed over an efficient p2p protocol that only transfers diffs.

  • Can someone elucidate how this offers a different set of costs and benefits?
  • When should one use the Dat toolkit, and when should something like OrbitDB be used?

Will this work behind a mobile network NAT?

I assume it depends on the IPFS pubsub implementation, but as users you might know best: can you set up an OrbitDB database on two mobile-network clients and store and read items freely?

Service Worker

I'm interested in making OrbitDB work in a Web Service Worker. This way:

  1. It would be in a different thread than the UI.
  2. One instance serves multiple tabs.
  3. It could redirect HTTP traffic so that you can respond with content in the database.

When I tried it briefly, I found that localStorage is not defined in the scope of a Service Worker. I tried a hack of defining it myself, but ran into other issues and haven't returned to it. Has anyone else looked into this?

I think the transition could be painless from an API perspective: since most calls are asynchronous, you could just make them RPCs to the Service Worker.
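A rough sketch of that RPC idea using plain postMessage, independent of OrbitDB; callInWorker and handleDbCall are hypothetical names, and error handling is omitted.

// main.js: forward an asynchronous call to the service worker and await the reply.
function callInWorker (method, args) {
  return new Promise((resolve) => {
    const id = Math.random().toString(36).slice(2)
    const onMessage = (event) => {
      if (event.data && event.data.id === id) {
        navigator.serviceWorker.removeEventListener('message', onMessage)
        resolve(event.data.result)
      }
    }
    navigator.serviceWorker.addEventListener('message', onMessage)
    navigator.serviceWorker.controller.postMessage({ id, method, args })
  })
}

// sw.js: run the database call in the worker and post the result back.
self.addEventListener('message', async (event) => {
  const { id, method, args } = event.data
  const result = await handleDbCall(method, args)   // hypothetical dispatcher around the db
  event.source.postMessage({ id, result })
})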

Book Outline

This issue is the scratch pad for the outline and basis for discussion. Feel free to comment here to leave feedback at any time.


The following is extremely subject to change


The OrbitDB Field Manual

Overall, the book is written in the second person: you do this, you should know, etc. Parts 1 and 3 are practical, and Parts 2 and 4 are theoretical.

Introduction

Preamble [1p]

Sets the book up with a very loose but catchy narrative about our distributed future and why persistent-yet-malleable data structures are necessary.

What is OrbitDB? [3p]

Non-technical description of OrbitDB. For the technical, we point to Part 1 for a tutorial and Part 3 for overall architecture. Briefly touch upon IPFS context.

What can I use it for? [2p]

What has already been built with OrbitDB? What can one do with a distributed database like OrbitDB? What apps should be built? Points to Part 2 for ways of thinking distributed.

A Warning to Fellow Travelers in Our Distributed Future [1p]

This is alpha software. Not just OrbitDB, all of it - an entire industry. Be careful. Points to part 4 for Advanced Topics.

Part 1: The Tutorial

An upbeat, imperative romp from empty files to complete app in under 100 pages. Need an idea for an app. Constrained entirely to JavaScript to maximize focus - no HTML or CSS

Coding instructions are interleaved with sections called "What Just Happened" that explain what just happened after the code ran.

Chapter 1: Getting Started

Instantiating IPFS and OrbitDB

  • Resolves #367

Creating a Database

Chapter 2: Reading and Writing Data

Choosing a data type

Chapter 3: Peer-to-Peer Replication

Replication Overview

Replicating in the Browser

Replicating in Node.js

Replication between Browser and Node.js

  • Resolves #496

Chapter 4: Identity and Permissions

Access Control

Identity Management

Security Disclosures

  • Resolves: #397
  • Resolves: #222
  • Resolves: #327
  • Resolves: #357
  • Resolves: #475
  • Resolves: #380
  • Resolves: #458
  • Resolves: #467

Part 2: Thinking Peer to Peer

What this is NOT

This is not Bittorrent

This is not Git

This is not Kazaa

Comparison to Traditional Database Systems

On Sharding

What this is

  • Resolves #536

On Performance and Scalability

Persistence and Validation

  • Resolves #36
  • Resolves #310

Interactions with the IPFS Ecosystem

Part 3: The Architecture of OrbitDB

  • Resolves #342 Data persistence on IPFS

ipfs-log

Describes CRDTs and Merkle-Dags

The stores

Keyvalue

Docstore

Counter

Log

Feed

Workshop: Creating Your Own Store

Part 4: What Next?

Appendix 1: CRDTs

Appendix 2: Mobile Ambients: A Peek Ahead

Unable to view a db created on the web interface from a program, and vice versa

I am trying to create a database in one place, and access it somewhere else, but I am having trouble.

I can create a database using the web example (https://ipfs.io/ipfs/QmeESXh9wPib8Xz7hdRzHuYLDuEUgkYTSuujZ2phQfvznQ/), but when I try to access it from node, the line db = await orbitdb.log('/orbitdb/QmXiviShAQzzU6HFnBy8fC32GmmRXNPEqFoyiLjBzZMoBm/database') hangs indefinitely. I can create the database from scratch in my code without any issues, but when I try to access it from the web example or from the same code on another computer, it fails.

Any idea what I'm doing wrong?

Sharing a public key between multiple nodes

Hi,

I am using orbit-db for a multi-node application. Reading the docs section on access control, each node can instantiate OrbitDB and generate its key, and then a database is created with all the keys.

My question is: if I were to use one key (and share it among all nodes), is this safe? I.e., will part of the code get confused if multiple machines are running orbitdb with the same key?

The reason I want to share a key between nodes is that it makes database creation simpler: instead of generating a key on each node and then creating a database, I can just generate one key, create the database then and there, and use the same config everywhere.

Thanks!

FAQ: Database replication is not working

This is probably the most frequently asked question, and it needs a canonical answer as well as detailed "troubleshooting" steps, as the reasons seem to vary. However, there are two major ones that I have seen constantly popping up: how orbitdb is used in app code (e.g. calling load() or listening for the replicated event) and IPFS problems (which can be due to many reasons).

Leaving this open to collect all the relevant issues and answers as I'm pretty sure this has been answered in the past by multiple people to multiple questions around the topic.

Would love comments and contributions to link to the issues or a PR to gather the information to a document directly.

Possibly related issues orbitdb/orbitdb#264, orbitdb/orbit-db#349, orbitdb/orbitdb#315 and orbitdb/orbit-db#442. There may be others.

How do you set up a cluster?

It's clear from the readme how to set up a database that can be used locally, but I don't understand how to set up a cluster with high availability and durability. It's also not clear to me how database clients could connect to such a cluster without becoming part of it.

Wall clock for feeds

Feeds come with what seems to be a Lamport clock, which means items in a feed are ordered relatively (the first element has timestamp 0, the second 1, etc.).

When implementing a Twitter-like app, different feeds from other users have to be replicated and presented to the user. This poses a problem when trying to display them as if they were temporally ordered, because we have no way to know the exact order of the union of two feeds.

A way to solve this would be to let users specify the UTC timestamp of their posts, but that would be easily abusable (users setting the timestamp in the future or in the past, for example). While a cool piece of functionality, it would disrupt the concept of a timeline.

This issue is to discuss a way of time-ordering different feeds in a reliable way.
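For reference, the approach discussed above (user-supplied wall-clock timestamps, with all of its stated trust caveats) would look roughly like this; createdAt is an application-level field that OrbitDB neither provides nor verifies.

// Writing: attach a wall-clock timestamp to each post's payload.
await myFeed.add({ text: 'hello world', createdAt: Date.now() })

// Reading: merge several replicated feeds and sort by the claimed timestamp.
// Nothing stops a peer from lying about createdAt - that is the open problem.
function mergedTimeline (feeds) {
  return feeds
    .flatMap(feed => feed.iterator({ limit: -1 }).collect())
    .map(entry => entry.payload.value)
    .sort((a, b) => a.createdAt - b.createdAt)
}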

During the replication process, can I vet each entry individually before storing?

Hi!
I am interested in creating a network where nodes broadcast messages and the rest of the nodes store them.
I need to ensure, though, that a node cannot just broadcast garbage messages that don't contain a PoW.

Is there any way for nodes in orbitdb to "vet" all entries in their database before they are stored to ensure their validity?

Thanks

Add license to PDF

I'm not sure that it is currently added to the PDF. Perhaps it should be included in metadata.yml?

How does orbitdb sync?

I have prepared a cluster of 2 IPFS nodes with the IPFS daemon and a swarm key.
I am running orbitdb and connecting to the IPFS node with "ipfs-api" over HTTP.
I am able to post data through orbitdb (feed, keyvalue) from one node and am able to fetch it back in another orbitdb instance connected to the other node.
The issue is that when I call "put" in one instance and try to "get" it from another instance, it's not able to "get" it the first time. But if the same "get" is called from the same node a second time, it shows the data.

I am not sure how orbitdb sync works but below is the code that I am running.

Node 1-----------
const OrbitDB = require('orbit-db');
const IpfsApi = require('ipfs-api');

(async () => {
  const ipfs = IpfsApi('localhost', '5002', { protocol: 'http' });
  const orbitdb = new OrbitDB(ipfs);
  // Only this node's key may write to the database.
  const writePermissions = [orbitdb.key.getPublic('hex')];
  const kv = await orbitdb.kvstore('new-db', { write: writePermissions });
  await kv.load();
  await kv.put('test', 80);
})();

Node 2------------
const OrbitDB = require('orbit-db');
const IpfsApi = require('ipfs-api');

(async () => {
  const ipfs = IpfsApi('localhost', '5002', { protocol: 'http' });
  const orbitdb = new OrbitDB(ipfs);
  const db = await orbitdb.open('/orbitdb/QmPJ4QXJDfBjPKB6JKZcqmrDfDxbubC85BMQvxDcR2Gunb/new-db');
  await db.load();
  console.log(db.get('test'));
})();

Please help!!!
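A pattern that usually helps in cases like this (see also the replication FAQ above): instead of calling get right after open, wait for the 'replicated' event, which fires once updates from other peers have been merged into the local copy.

// Node 2: read only after an update has actually been replicated.
db.events.on('replicated', () => {
  console.log('replicated, value is now:', db.get('test'));
});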

Understanding the background of orbit-db

I am trying to understand how orbit-db actually works. Is there an ever-growing chain of DB entries that every node maintains, with every node publishing its HEAD to an IPFS pubsub room?

And at the same time, is every node also listening on that room, fetching the HEADs of others (or is there only one at a time?), merging them with its own, and pushing the result back?

[HELP] Is some prior IPFS setup required?

In order for this project to work, are there any setup steps required (initialize IPFS db externally...) before executing the app?

I just downloaded and executed it and this is what I get from the browser:
Uncaught Error: invalid multiaddr: /dns4/star-signal.cloud.ipfs.team/wss/p2p-webrtc-star/ipfs/QmVwQWBf6dmfgVn7buek9tGvtvN2rSvrh3GFZqqpjijZyX
    at cleanUrlSIO (index.min.js:1)
    at EventEmitter.listener.listen (index.min.js:1)
    at cb (index.min.js:1)
    at index.min.js:1
    at index.min.js:1
    at exports.default (index.min.js:1)
    at exports.default (index.min.js:1)
    at exports.default (index.min.js:1)
    at Object.listen (index.min.js:1)
    at each (index.min.js:1)

However, I can execute the version published in IPFS (https://ipfs.io/ipfs/QmTJGHccriUtq3qf3bvAQUcDUHnBbHNJG2x2FYwYUecN43/) fine and working, including opening two tabs and sharing data.

To clarify: the way that two browsers interact with the same DB is connecting to the same repo ('/orbitdb/examples/todomvc/ipfs/0.27.0'), right? I'm very new to IPFS and Orbit, so any help pointing me to mistakes/alternative directions would be much appreciated.

Does database replication require port-forwarding?

Do nodes need to have a port forwarded to accept incoming connections for orbit-db database replication to occur between them? Could two nodes replicate their databases with each other if neither of them had port forwarding enabled and they were on different networks?

Data persistence on IPFS

Hello, I'm just curious to know how data persistence is handled on IPFS, given that (if my understanding is correct) files must be "pinned" by an IPFS node in order to achieve persistence, yet the data is prone to loss (or unavailability) if it's hosted on a single node that goes offline.

Where can I check for details on how persistence (and availability) is ensured by OrbitDB? Also, on a related note, does OrbitDB work as a "private" cluster of IPFS nodes, or does it connect to the IPFS network as a whole via some public gateway?

Thanks beforehand for any feedback on the matter.

OrbitDB pinning service

Because orbitdb is based on js-ipfs, and js-ipfs does not yet have garbage collection, all orbit databases are persistent. In IPFS terms, orbit databases are pinned by default.

However, when run in the browser, an orbit database uses localStorage (and leveldb to cache). This is a surprisingly robust method of storage, but it is still a bit vulnerable. In addition, for a new browser user to sync to the orbit database, another user must be online simultaneously. A user who starts an orbit database and then switches devices will have the same problem.

As haad says, this is the classic bootstrapping problem.

It would be helpful to have an easy way to run a pinning service which will replicate a database and stand ready to sync with new users, to provide resilience and availability. This pinning service would be a cluster of orbit database servers, and so this issue is closely related to orbitdb/orbit-db#165, setting up an orbitdb cluster.
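A minimal sketch of such a service, assuming the same js-ipfs and orbit-db setup used elsewhere in these issues: a long-running Node process that opens a list of database addresses, loads them, and simply stays online so new peers always have someone to replicate from. The addresses are placeholders, and a real service would also need a persistent IPFS repo and some monitoring.

const IPFS = require('ipfs')
const OrbitDB = require('orbit-db')

// Addresses of the databases this service should keep available (placeholders).
const ADDRESSES = [
  '/orbitdb/Qm.../users',
  '/orbitdb/Qm.../posts'
]

const ipfs = new IPFS({ EXPERIMENTAL: { pubsub: true } })

ipfs.on('ready', async () => {
  const orbitdb = new OrbitDB(ipfs)

  for (const address of ADDRESSES) {
    const db = await orbitdb.open(address)     // connects to peers with the same db open
    await db.load()                            // load whatever is already stored locally
    db.events.on('replicated', () => {
      console.log(`kept up to date: ${address}`)
    })
  }

  console.log('pinning service is up; leave this process running')
})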

Large scale database

Hi,
I'm currently doing research on serverless databases and OrbitDB seems to be an amazing solution. Thanks for all this work.

However, I'm thinking about implementing a massive document store that could contain thousands and thousands of documents and making it as searchable as possible. It looks like storing such content in a single OrbitDB docstore could potentially overload the node (browser) and make it crash...

So I would like to know whether a split implementation would make more sense and scale better:

  • every document is a single OrbitDB docstore database such as "entity-01", "entity-02", ...
  • after inserting/updating a document, we update an index file (OrbitDB database) that only contains the IPFS hash of each document and a few searchable index fields.

Do you think this type of implementation can make sense and scale?

Thanks.
Greg
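A rough sketch of the split design described above, assuming one docstore per entity plus a keyvalue index database; the helper names and fields are illustrative only, and whether this scales better than a single docstore depends on your access patterns.

// One small docstore per entity, plus one index database that maps an
// entity id to its database address and a few searchable fields.
async function saveEntity (orbitdb, index, id, document) {
  const entityDb = await orbitdb.docs(`entity-${id}`)
  await entityDb.load()
  await entityDb.put({ _id: id, ...document })

  // Keep only lightweight, searchable metadata in the shared index.
  await index.put(id, {
    address: entityDb.address.toString(),
    title: document.title,
    tags: document.tags
  })
}

async function loadEntity (orbitdb, index, id) {
  const meta = index.get(id)
  const entityDb = await orbitdb.open(meta.address)   // fetch just this one document's db
  await entityDb.load()
  return entityDb.get(id)[0]
}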

It seems loading a remote db is not working

const IPFS = require('ipfs')
const OrbitDB = require('orbit-db')

const ipfsOptions = {
  start: true,
  EXPERIMENTAL: {
    pubsub: true
  },
  config: {
    Addresses: {
      Swarm: [
        '/dns4/ws-star.discovery.libp2p.io/tcp/443/wss/p2p-websocket-star'
      ]
    }
  }
}

const ipfs = new IPFS(ipfsOptions)

ipfs.on('error', (e) => console.error(e))

ipfs.on('ready', async () => {
  const orbitdb = new OrbitDB(ipfs)

  const db = await orbitdb.open('ajdb', {
    create: true,
    overwrite: true,
    localOnly: false,
    type: 'keyvalue',
    write: ['*']
  })

  await db.load()

  await db.set('name', 'hello')

  const value = db.get('name')
  console.log(value)
})

I'm testing this on two computers. The code above is on one computer.

The code below is on the other. One computer creates the db and adds an entry to it; the other one tries to retrieve it.

const IPFS = require('ipfs')
const OrbitDB = require('orbit-db')

const ipfsOptions = {
  start: true,
  EXPERIMENTAL: {
    pubsub: true
  },
  config: {
    Addresses: {
      Swarm: [
        '/dns4/ws-star.discovery.libp2p.io/tcp/443/wss/p2p-websocket-star'
      ]
    }
  }
}

const ipfs = new IPFS(ipfsOptions)

ipfs.on('error', (e) => console.error(e))

ipfs.on('ready', async () => {
  const orbitdb = new OrbitDB(ipfs)

  const db = await orbitdb.open('/orbitdb/QmcYbhRpK8pySXwpH5iTPTNna5aWBf2dE75UoLvC7ENFM5/ajdb', { sync: true })
  await db.load()

  const value = db.get('name')
  console.log(value)
})

But the second computer, which is retrieving the entries (or at least trying to), returns undefined. I'm not sure why I can't load a remote database.
I've also tried querying the database after the 'replicated' event, but the event is not triggering at all.
